SGI Varsity Update 1998 August.iso / dist / patchSG0002826.idb / var / pcp / pmdas / irix / help.pag
PCP Help Text | 1998-07-29 | 160KB | 2,701 lines
@ irix.mem.fault.prot.total protection faults
Cumulative count of protection page faults detected by hardware (e.g.
illegal access to a page) and faults caused by writes to
(software protected) writable pages (e.g. copy-on-write pages).
See also irix.mem.fault.prot.cow and irix.mem.fault.prot.steal.
@ irix.mem.fault.prot.cow copy-on-write protection faults
Cumulative count of protection faults caused by writes to shared
copy-on-write pages.
@ irix.mem.fault.prot.steal protection faults on unshared writable pages
Cumulative count of protection faults caused by writes to unshared,
software protected writable pages.
@ irix.mem.fault.addr.total address translation page faults
Cumulative count of address translation page faults where a valid page
is not in memory.
See also irix.mem.fault.addr.* for subcounts classified by the place
from which the valid page is found and made memory resident.
@ irix.mem.fault.addr.cache page faults resolved in the page cache
Cumulative count of address translation fault pages resolved in the
page cache.
@ irix.mem.fault.addr.demand page faults resolved by demand fill or demand zero
Cumulative count of address translation page faults resolved by page
demand fill or demand zero.
@ irix.mem.fault.addr.file page faults resolved in the file system
Cumulative count of address translation page faults resolved from the
file system.
@ irix.mem.fault.addr.swap page faults resolved in the swap space
Cumulative count of address translation page faults resolved from the
swap space.
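The irix.mem.fault.addr.* subcounts partition irix.mem.fault.addr.total by the place the valid page was found. One plausible way to turn two samples of these cumulative counters into a percentage breakdown is sketched below; the metric names follow this file, but the sample values are invented for illustration.

```python
# Fraction of address-translation faults resolved from each source,
# computed from the deltas of two samples of the cumulative counters.
# Sample values below are invented for illustration.

def fault_breakdown(prev, curr):
    """Return {source: percent of faults resolved there} for the interval."""
    sources = ("cache", "demand", "file", "swap")
    total = curr["total"] - prev["total"]
    if total == 0:
        return {s: 0.0 for s in sources}   # no faults in the interval
    return {s: 100.0 * (curr[s] - prev[s]) / total for s in sources}

prev = {"total": 1000, "cache": 400, "demand": 300, "file": 200, "swap": 100}
curr = {"total": 1200, "cache": 480, "demand": 340, "file": 250, "swap": 130}
print(fault_breakdown(prev, curr))
```

Because all of these metrics are cumulative counters, only deltas between samples are meaningful; absolute values depend on time since boot.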
@ irix.mem.tlb.flush count of single processor TLB flushes
Cumulative count of translation lookaside buffer (TLB) flushes for a
single processor.
@ irix.mem.tlb.invalid count of TLB id invalidates for a process
Cumulative count of the number of times the translation lookaside
buffer (TLB) ids are invalidated for a particular process.
@ irix.mem.tlb.rfault count of TLB page reference faults
Cumulative count of translation lookaside buffer (TLB) faults where the
valid page is in memory, but the hardware valid bit has been disabled to
emulate a hardware reference bit.
@ irix.mem.tlb.sync count of TLB flushes on all processors
Cumulative count of translation lookaside buffer (TLB) flushes on all
processors.
@ irix.mem.tlb.tfault count of user page table or kernel virt addr TLB miss
Cumulative count of translation lookaside buffer (TLB) faults for user
page table or kernel virtual address translation faults, i.e. the
address translation is not resident in the TLB.
@ irix.mem.tlb.purge count of all-CPU TLB purge operations
Cumulative count of the number of times the translation lookaside
buffer (TLB) entries for a single process are purged from all CPUs.
@ irix.mem.tlb.idnew count of new TLB ids issued
Cumulative count of new translation lookaside buffer (TLB) ids issued.
@ irix.mem.tlb.idwrap count of TLB flushes because TLB ids have been depleted
Cumulative count of translation lookaside buffer (TLB) flushes caused
by depletion of TLB ids.
@ irix.mem.tlb.kvmwrap count of TLB flushes due to kernel vmem depletion
Cumulative count of translation lookaside buffer (TLB) flushes caused by
clean (with respect to TLB) kernel virtual memory depletion.
This is expected to occur rarely.
@ irix.mem.freeswap free swap space
Current (instantaneous) free swap space measured in Kbytes.
@ irix.mem.paging.reclaim pages reclaimed by the paging daemon
Cumulative count of pages reclaimed by the paging daemon.
@ irix.mem.halloc Number of times kernel heap allocation requested
The number of times since boot the kernel has allocated memory in its heap.
This includes reallocation of existing blocks.
@ irix.mem.heapmem Total number of bytes in kernel heap
@ irix.mem.hfree Number of times memory freed in kernel heap
@ irix.mem.hovhd Number of bytes of overhead in kernel heap (heap headers etc.)
@ irix.mem.hunused Number of bytes of unallocated space in kernel heap
@ irix.mem.zfree Number of zone_free requests made
Number of zone_free requests made.
Not relevant in IRIX 6.5 and later, see irix.mem.hfree instead.
@ irix.mem.zonemem Current number of Kbytes in kernel zones
Current number of Kbytes in kernel zones. The kernel zones are fixed-size
memory allocators that use a high watermark.
Not relevant in IRIX 6.5 and later, see irix.mem.heapmem instead.
@ irix.mem.zreq Number of zone_alloc requests made
Number of zone_alloc requests made.
Not relevant in IRIX 6.5 and later, see irix.mem.halloc instead.
@ irix.mem.iclean Number of times I-cache flushed allocating a clean page
@ irix.mem.bsdnet Kbytes of memory currently in use by BSD networking
@ irix.mem.palloc Number of page allocation requests since boot
@ irix.mem.unmodfl Number of times getpages found unmodified pages in a file
@ irix.mem.unmodsw Number of times getpages found unmodified pages in swap
@ irix.mem.system.sptalloc allocated system page table entries
@ irix.mem.system.sptfree free system page table entries
@ irix.mem.system.sptclean clean system page table entries
@ irix.mem.system.sptdirty dirty system page table entries
@ irix.mem.system.sptintrans "in transit" system page table entries
@ irix.mem.system.sptaged aged system page table entries
@ irix.mem.system.sptbp system VM in buffer cache
@ irix.mem.system.sptheap system VM in kernel heap
@ irix.mem.system.sptzone system VM in kernel zones
@ irix.mem.system.sptpt system VM in page tables
@ irix.swap.pagesin cumulative pages swapped in
The cumulative count of the number of pages transferred in from all swap
devices since system boot time.
@ irix.swap.pagesout cumulative pages swapped out
The cumulative count of the number of pages transferred out to all swap
devices since system boot time.
@ irix.kernel.all.pswitch cumulative process switches
The cumulative number of process (context) switches that have occurred
since system boot time.
@ irix.swap.procout cumulative process swap outs
The cumulative number of process swap outs that have occurred since
system boot time.
@ irix.swap.in cumulative "swap in" transfers
The cumulative number of swap I/O transfers (reads) from all swap
devices since system boot time. Each transfer may involve one or more
pages (see also irix.swap.pagesin).
@ irix.swap.out cumulative "swap out" transfers
The cumulative number of swap I/O transfers (writes) to all swap
devices since system boot time. Each transfer may involve one or more
pages (see also irix.swap.pagesout).
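Since each swap transfer may move one or more pages, the ratio of the page counters to the transfer counters gives the average pages moved per transfer over a sampling interval. A minimal sketch, using the metric pairing suggested above (irix.swap.pagesin with irix.swap.in); the sample values are invented:

```python
# Average pages moved per swap-in transfer over a sampling interval,
# from two samples of the cumulative counters irix.swap.pagesin and
# irix.swap.in. Sample values are invented for illustration.

def pages_per_transfer(pages_prev, pages_curr, xfers_prev, xfers_curr):
    d_xfers = xfers_curr - xfers_prev
    if d_xfers == 0:
        return 0.0                     # no swap activity in the interval
    return (pages_curr - pages_prev) / d_xfers

print(pages_per_transfer(5000, 5240, 1000, 1060))  # 240 pages / 60 transfers
```

The same computation applies to the swap-out direction with irix.swap.pagesout and irix.swap.out.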
@ irix.kernel.all.cpu.idle CPU idle time (summed over all processors)
A cumulative count of the number of milliseconds of CPU idle time, summed over
all processors.
Note that this metric is derived by point sampling the state of the currently
executing process(es) once per tick of the system clock.
@ irix.kernel.all.cpu.intr CPU interrupt time (summed over all processors)
A cumulative count of the number of milliseconds of CPU time spent processing
interrupts, summed over all processors.
Note that this metric is derived by point sampling the state of the currently
executing process once per tick of the system clock.
@ irix.kernel.all.cpu.sys CPU kernel time (summed over all processors)
A cumulative count of the number of milliseconds of CPU time spent executing
below the system call interface in the kernel (system mode), summed over
all processors.
Note that this metric is derived by point sampling the state of the currently
executing process once per tick of the system clock.
@ irix.kernel.all.cpu.sxbrk CPU time waiting for memory resources (summed over all processors)
A cumulative count of the number of milliseconds of CPU time spent idle
when there are processes blocked due to depleted memory resources and
there are no processes waiting for I/O.
Note that this metric is derived by point sampling the state of the currently
executing process once per tick of the system clock.
@ irix.kernel.all.cpu.user CPU user time (summed over all processors)
A cumulative count of the number of milliseconds of CPU time spent executing
above the system call interface in applications (user mode), summed over
all processors.
Note that this metric is derived by point sampling the state of the currently
executing process once per tick of the system clock.
@ irix.kernel.all.cpu.wait.total CPU wait time (summed over all processors)
A cumulative count of the number of milliseconds of CPU time spent waiting for
I/O, summed over all processors.
This metric is the sum of the other irix.kernel.all.cpu.wait.* metrics.
Note that this metric is derived by point sampling the state of the currently
executing process once per tick of the system clock.
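The irix.kernel.all.cpu.* metrics are cumulative milliseconds summed over all processors, so converting them to utilization percentages requires dividing counter deltas by the interval length times the processor count. A sketch under that assumption; the counter values and the two-CPU configuration are invented for illustration:

```python
# Convert cumulative per-mode CPU millisecond counters (summed over all
# processors, as described above) into utilization percentages for a
# sampling interval. Available CPU time is interval_ms * ncpu.
# Counter values are invented for illustration.

def cpu_utilization(prev, curr, interval_ms, ncpu):
    avail = interval_ms * ncpu
    return {mode: 100.0 * (curr[mode] - prev[mode]) / avail for mode in prev}

prev = {"user": 10000, "sys": 4000, "idle": 86000}
curr = {"user": 16000, "sys": 6000, "idle": 98000}
print(cpu_utilization(prev, curr, interval_ms=10000, ncpu=2))
```

Because the underlying counters are derived by point sampling once per clock tick, short intervals can misattribute time; longer sampling intervals smooth this out.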
@ irix.kernel.all.io.iget Number of inode lookups performed (summed over all processors)
A cumulative count of the number of inode lookups performed,
summed over all processors.
@ irix.kernel.all.readch Number of bytes transferred by the read() system call (summed over all processors)
A cumulative count of the number of bytes transferred by the read() system call,
summed over all processors.
@ irix.kernel.all.runocc Number of times the "run queue" is non-zero
At each "clock tick" if the number of runnable processes (i.e.
processes on the "run queue") for ANY processor is non-zero, this
counter is incremented by one.
@ irix.kernel.all.runque Cumulative length of the queue of runnable processes
At each "clock tick" the number of runnable processes (i.e. processes
on the "run queue") for EVERY processor is added to this counter.
Over two consecutive samples the "average" run queue length may be
computed as
if delta(irix.kernel.all.runocc) is zero
zero
else
delta(irix.kernel.all.runque) / delta(irix.kernel.all.runocc)
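The average run queue computation above can be sketched as a function of two samples of the cumulative counters; the sample values are invented for illustration:

```python
# "Average" run queue length over a sampling interval, following the
# delta(runque)/delta(runocc) rule described above, with the
# zero-delta guard. Sample values are invented for illustration.

def avg_runqueue(runque_prev, runque_curr, runocc_prev, runocc_curr):
    d_occ = runocc_curr - runocc_prev
    if d_occ == 0:
        return 0.0                     # run queue never observed non-empty
    return (runque_curr - runque_prev) / d_occ

print(avg_runqueue(500, 530, 200, 210))  # 30 queued processes / 10 ticks
```

The identically structured swap-queue computation (irix.kernel.all.swap.swpque over irix.kernel.all.swap.swpocc) follows the same pattern.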
@ irix.kernel.all.swap.swpocc Number of times there are swapped processes
At each "clock tick" if the number of swapped processes is non-zero,
this counter is incremented by one.
@ irix.kernel.all.swap.swpque Cumulative length of the queue of swapped processes
At each "clock tick" the number of swapped processes is added to this
counter.
Over two consecutive samples the "average" swap queue length may be
computed as
if delta(irix.kernel.all.swap.swpocc) is zero
zero
else
delta(irix.kernel.all.swap.swpque) / delta(irix.kernel.all.swap.swpocc)
@ irix.kernel.all.syscall Number of system calls made (summed over all processors)
A cumulative count of the number of system calls made,
summed over all processors.
@ irix.kernel.all.sysexec Number of exec() system calls made (summed over all processors)
A cumulative count of the number of exec() system calls made,
summed over all processors.
@ irix.kernel.all.sysfork Number of fork() system calls made (summed over all processors)
A cumulative count of the number of fork() system calls made,
summed over all processors.
@ irix.kernel.all.sysread Number of read() system calls made (summed over all processors)
A cumulative count of the number of read() system calls made,
summed over all processors.
@ irix.kernel.all.syswrite Number of write() system calls made (summed over all processors)
A cumulative count of the number of write() system calls made,
summed over all processors.
@ irix.kernel.all.sysother Number of "other" system calls made (summed over all processors)
A cumulative count of the number of system calls (other than read(),
write(), fork() and exec()) made, summed over all processors.
@ irix.kernel.all.cpu.wait.gfxc CPU graphics context switch wait time (summed over all processors)
A cumulative count of the number of milliseconds of CPU time spent waiting for
graphics context switches, summed over all processors.
Note that this metric is derived by point sampling the state of the currently
executing process once per tick of the system clock.
@ irix.kernel.all.cpu.wait.gfxf CPU graphics FIFO wait time (summed over all processors)
A cumulative count of the number of milliseconds of CPU time spent waiting on a
full graphics fifo, summed over all processors.
Note that this metric is derived by point sampling the state of the currently
executing process once per tick of the system clock.
@ irix.kernel.all.cpu.wait.io CPU filesystem I/O wait time (summed over all processors)
A cumulative count of the number of milliseconds of CPU time spent waiting for
filesystem I/O, summed over all processors.
Note that this metric is derived by point sampling the state of the currently
executing process once per tick of the system clock.
@ irix.kernel.all.cpu.wait.pio CPU physical (non-swap) I/O wait time (summed over all processors)
A cumulative count of the number of milliseconds of CPU time spent waiting for
non-swap I/O to complete, summed over all processors.
Note that this metric is derived by point sampling the state of the currently
executing process once per tick of the system clock.
@ irix.kernel.all.cpu.wait.swap CPU swap I/O wait time (summed over all processors)
A cumulative count of the number of milliseconds of CPU time spent waiting for
physical swap I/O to complete, summed over all processors.
Note that this metric is derived by point sampling the state of the currently
executing process once per tick of the system clock.
@ irix.kernel.all.writech Number of bytes transferred by the write() system call (summed over all processors)
A cumulative count of the number of bytes transferred by the write() system call,
summed over all processors.
@ irix.kernel.all.io.bread Total block I/O read throughput (K)
Cumulative amount of data read from block devices (Kilobytes)
@ irix.kernel.all.io.bwrite Total block I/O write throughput (K)
Cumulative amount of data written to block devices (Kilobytes)
@ irix.kernel.all.io.lread Total logical read throughput (K)
Cumulative amount of data read from system buffers into user memory (Kilobytes)
@ irix.kernel.all.io.lwrite Total logical write throughput (K)
Cumulative amount of data written from user memory into system buffers (Kilobytes)
@ irix.kernel.all.io.phread Total physical I/O read throughput (K)
Cumulative amount of data read via raw (physical) devices (Kilobytes)
@ irix.kernel.all.io.phwrite Total physical I/O write throughput (K)
Cumulative amount of data written via raw (physical) devices (Kilobytes)
@ irix.kernel.all.io.wcancel Total data not written due to cancelled writes (K)
Cumulative amount of data that was not written when pending writes were
cancelled (Kilobytes)
@ irix.kernel.all.io.namei Number of pathname lookups performed
The number of times pathnames have been translated to vnodes.
@ irix.kernel.all.io.dirblk Kilobytes of directory blocks scanned
Cumulative count of the number of kilobytes of directory blocks scanned.
@ irix.kernel.all.tty.recvintr Input interrupt count for serial devices
Cumulative number of input interrupts received for serial devices
@ irix.kernel.all.tty.xmitintr Output interrupt count for serial devices
Cumulative number of output interrupts transmitted for serial devices
@ irix.kernel.all.tty.mdmintr Modem control interrupt count for serial devices
Cumulative number of modem control interrupts processed for serial devices
@ irix.kernel.all.tty.out Count of characters output to serial devices
Cumulative number of characters output to serial devices.
@ irix.kernel.all.tty.raw Count of "raw" characters received on serial lines
Cumulative number of raw characters received on serial lines.
@ irix.kernel.all.tty.canon Count of "canonical" characters received by the tty driver
Cumulative number of canonical characters received by the tty driver.
@ irix.gfx.ioctl Count of graphics ioctl() operations
Cumulative number of graphics ioctl() operations performed.
@ irix.gfx.ctxswitch Count of graphics context switches
Cumulative number of graphics context switches performed.
@ irix.gfx.swapbuf Count of graphics swap buffer calls
Cumulative number of graphics swap buffer operations performed.
@ irix.gfx.intr Count of non-FIFO graphics interrupts
Cumulative number of non-FIFO graphics interrupts processed.
@ irix.gfx.fifonowait Count of graphics FIFO interrupts that don't block
Cumulative number of FIFO graphics interrupts processed that don't block.
@ irix.gfx.fifowait Count of graphics FIFO interrupts that block
Cumulative number of FIFO graphics interrupts processed that block.
@ irix.kernel.all.intr.vme Count of VME interrupts
Cumulative number of VME interrupts processed.
@ irix.kernel.all.intr.non_vme Count of non-VME interrupts
Cumulative number of non-VME interrupts processed.
@ irix.kernel.all.ipc.msg Count of System V message operations
Cumulative number of System V message operations performed.
@ irix.kernel.all.ipc.sema Count of System V semaphore operations
Cumulative number of System V semaphore operations performed.
@ irix.kernel.all.pty.masterch Count of characters sent to pty master devices
Cumulative number of characters sent to pty master devices.
@ irix.kernel.all.pty.slavech Count of characters sent to pty slave devices
Cumulative number of characters sent to pty slave devices.
@ irix.kernel.all.flock.alloc Total number of record locks allocated
Cumulative number of record locks allocated.
@ irix.kernel.all.flock.inuse Count of record locks currently in use
Current number of record locks in use.
@ irix.xpc.kernel.all.cpu.idle High precision irix.kernel.all.cpu.idle
This is a higher precision version of irix.kernel.all.cpu.idle.
See help on irix.kernel.all.cpu.idle for more details.
@ irix.xpc.kernel.all.cpu.intr High precision irix.kernel.all.cpu.intr
This is a higher precision version of irix.kernel.all.cpu.intr.
See help on irix.kernel.all.cpu.intr for more details.
@ irix.xpc.kernel.all.cpu.sys High precision irix.kernel.all.cpu.sys
This is a higher precision version of irix.kernel.all.cpu.sys.
See help on irix.kernel.all.cpu.sys for more details.
@ irix.xpc.kernel.all.cpu.sxbrk High precision irix.kernel.all.cpu.sxbrk
This is a higher precision version of irix.kernel.all.cpu.sxbrk.
See help on irix.kernel.all.cpu.sxbrk for more details.
@ irix.xpc.kernel.all.cpu.user High precision irix.kernel.all.cpu.user
This is a higher precision version of irix.kernel.all.cpu.user.
See help on irix.kernel.all.cpu.user for more details.
@ irix.xpc.kernel.all.cpu.wait.total High precision irix.kernel.all.cpu.wait.total
This is a higher precision version of irix.kernel.all.cpu.wait.total.
See help on irix.kernel.all.cpu.wait.total for more details.
@ irix.xpc.kernel.all.cpu.wait.gfxc High precision irix.kernel.all.cpu.wait.gfxc
This is a higher precision version of irix.kernel.all.cpu.wait.gfxc.
See help on irix.kernel.all.cpu.wait.gfxc for more details.
@ irix.xpc.kernel.all.cpu.wait.gfxf High precision irix.kernel.all.cpu.wait.gfxf
This is a higher precision version of irix.kernel.all.cpu.wait.gfxf.
See help on irix.kernel.all.cpu.wait.gfxf for more details.
@ irix.xpc.kernel.all.cpu.wait.io High precision irix.kernel.all.cpu.wait.io
This is a higher precision version of irix.kernel.all.cpu.wait.io.
See help on irix.kernel.all.cpu.wait.io for more details.
@ irix.xpc.kernel.all.cpu.wait.pio High precision irix.kernel.all.cpu.wait.pio
This is a higher precision version of irix.kernel.all.cpu.wait.pio.
See help on irix.kernel.all.cpu.wait.pio for more details.
@ irix.xpc.kernel.all.cpu.wait.swap High precision irix.kernel.all.cpu.wait.swap
This is a higher precision version of irix.kernel.all.cpu.wait.swap.
See help on irix.kernel.all.cpu.wait.swap for more details.
@ irix.xpc.kernel.all.io.bread High precision irix.kernel.all.io.bread
This is a higher precision version of irix.kernel.all.io.bread.
See help on irix.kernel.all.io.bread for more details.
@ irix.xpc.kernel.all.io.bwrite High precision irix.kernel.all.io.bwrite
This is a higher precision version of irix.kernel.all.io.bwrite.
See help on irix.kernel.all.io.bwrite for more details.
@ irix.xpc.kernel.all.io.lread High precision irix.kernel.all.io.lread
This is a higher precision version of irix.kernel.all.io.lread.
See help on irix.kernel.all.io.lread for more details.
@ irix.xpc.kernel.all.io.lwrite High precision irix.kernel.all.io.lwrite
This is a higher precision version of irix.kernel.all.io.lwrite.
See help on irix.kernel.all.io.lwrite for more details.
@ irix.xpc.kernel.all.io.phread High precision irix.kernel.all.io.phread
This is a higher precision version of irix.kernel.all.io.phread.
See help on irix.kernel.all.io.phread for more details.
@ irix.xpc.kernel.all.io.phwrite High precision irix.kernel.all.io.phwrite
This is a higher precision version of irix.kernel.all.io.phwrite.
See help on irix.kernel.all.io.phwrite for more details.
@ irix.xpc.kernel.all.io.wcancel High precision irix.kernel.all.io.wcancel
This is a higher precision version of irix.kernel.all.io.wcancel.
See help on irix.kernel.all.io.wcancel for more details.
@ irix.xpc.kernel.all.io.dirblk High precision irix.kernel.all.io.dirblk
This is a higher precision version of irix.kernel.all.io.dirblk.
See help on irix.kernel.all.io.dirblk for more details.
@ irix.kernel.percpu.cpu.idle per processor idle CPU time
A count maintained for each processor, that accumulates the number
of milliseconds of CPU idle time. Note that this metric is derived
by point sampling the state of the currently executing process once
per tick of the system clock.
For single processor systems the one value is the same as for the metric
irix.kernel.all.cpu.idle.
@ irix.kernel.percpu.cpu.intr per processor interrupt CPU time
A count maintained for each processor, that accumulates the number
of milliseconds of CPU time spent processing interrupts.
Note that this metric is derived by point sampling the state of the
currently executing process once per tick of the system clock.
For single processor systems the one value is the same as for the metric
irix.kernel.all.cpu.intr.
@ irix.kernel.percpu.cpu.sys per processor CPU kernel time
A count maintained for each processor, that accumulates the number
of milliseconds of CPU time spent executing below the system call interface
in the kernel (system mode).
Note that this metric is derived by point sampling the state of the
currently executing process once per tick of the system clock.
For single processor systems the one value is the same as for the metric
irix.kernel.all.cpu.sys.
@ irix.kernel.percpu.cpu.sxbrk per processor time spent waiting for memory resources
A count maintained for each processor, that accumulates the number of
milliseconds spent idle when there are processes blocked due to
depleted memory resources and there are no processes waiting for I/O.
Note that this metric is derived by point sampling the
state of the currently executing process once per tick of the system
clock.
@ irix.kernel.percpu.cpu.user per processor user mode CPU time
A count maintained for each processor, that accumulates the number
of milliseconds of CPU time spent executing above the system call
interface in applications (user mode) on that processor.
Note that this metric is derived by point sampling the
state of the currently executing process once per tick of the system
clock.
For single processor systems the one value is the same as for the metric
irix.kernel.all.cpu.user.
@ irix.kernel.percpu.cpu.wait.total per processor total CPU wait time
A count maintained for each processor, that accumulates the number
of milliseconds of CPU time spent waiting for I/O. This metric is the
sum of the other irix.kernel.percpu.cpu.wait.* metrics.
Note that this metric is derived by point sampling the state of the
currently executing process once per tick of the system clock.
For single processor systems the one value is the same as for the metric
irix.kernel.all.cpu.wait.total.
@ irix.kernel.percpu.cpu.wait.gfxc per processor CPU graphics context switch wait time
A count maintained for each processor, that accumulates the number
of milliseconds of CPU time spent waiting for graphics context switches.
Note that this metric is derived by point sampling the state of the
currently executing process once per tick of the system clock.
For single processor systems the one value is the same as for the metric
irix.kernel.all.cpu.wait.gfxc.
@ irix.kernel.percpu.cpu.wait.gfxf per processor CPU graphics FIFO wait time
A count maintained for each processor, that accumulates the number
of milliseconds of CPU time spent waiting on a full graphics FIFO.
Note that this metric is derived by point sampling the state of the
currently executing process once per tick of the system clock.
For single processor systems the one value is the same as for the metric
irix.kernel.all.cpu.wait.gfxf.
@ irix.kernel.percpu.cpu.wait.io per processor CPU filesystem I/O wait time
A count maintained for each processor, that accumulates the number
of milliseconds of CPU time spent waiting for filesystem I/O.
Note that this metric is derived by point sampling the state of the
currently executing process once per tick of the system clock.
For single processor systems the one value is the same as for the metric
irix.kernel.all.cpu.wait.io.
@ irix.kernel.percpu.cpu.wait.pio per processor CPU physical (non-swap) I/O wait time
A count maintained for each processor, that accumulates the number
of milliseconds of CPU time spent waiting for non-swap I/O to complete.
Note that this metric is derived by point sampling the state of the
currently executing process once per tick of the system clock.
For single processor systems the one value is the same as for the metric
irix.kernel.all.cpu.wait.pio.
@ irix.kernel.percpu.cpu.wait.swap per processor CPU swap I/O wait time
A count maintained for each processor, that accumulates the number
of milliseconds of CPU time spent waiting for swap I/O to complete.
Note that this metric is derived by point sampling the state of the
currently executing process once per tick of the system clock.
For single processor systems the one value is the same as for the metric
irix.kernel.all.cpu.wait.swap.
@ irix.kernel.percpu.io.iget per processor number of inode lookups performed
A count maintained for each processor, that accumulates the number
of inode lookups performed.
For single processor systems the one value is the same as for the metric
irix.kernel.all.io.iget.
@ irix.kernel.percpu.io.bread per processor amount of data read from block devices
A count maintained for each processor, that accumulates the amount of
data read from block devices (Kilobytes).
For single processor systems the one value is the same as for the metric
irix.kernel.all.io.bread.
@ irix.kernel.percpu.io.bwrite per processor amount of data written to block devices
A count maintained for each processor, that accumulates the amount of
data written to block devices (Kilobytes).
For single processor systems the one value is the same as for the metric
irix.kernel.all.io.bwrite.
@ irix.kernel.percpu.io.lread per processor logical read throughput (K)
A count maintained for each processor, that accumulates the amount of data read
from system buffers into user memory (Kilobytes).
For single processor systems the one value is the same as for the metric
irix.kernel.all.io.lread.
@ irix.kernel.percpu.io.lwrite per processor logical write throughput (K)
A count maintained for each processor, that accumulates the amount of data
written from user memory into system buffers (Kilobytes).
For single processor systems the one value is the same as for the metric
irix.kernel.all.io.lwrite.
@ irix.kernel.percpu.io.phread per processor physical I/O read throughput (K)
A count maintained for each processor, that accumulates the amount of data read
via raw (physical) devices (Kilobytes).
For single processor systems the one value is the same as for the metric
irix.kernel.all.io.phread.
@ irix.kernel.percpu.io.phwrite per processor physical I/O write throughput (K)
A count maintained for each processor, that accumulates the amount of data
written via raw (physical) devices (Kilobytes).
For single processor systems the one value is the same as for the metric
irix.kernel.all.io.phwrite.
@ irix.kernel.percpu.io.wcancel per processor data not written due to cancelled writes (K)
A count maintained for each processor, that accumulates the amount of data that
was not written when pending writes were cancelled (Kilobytes).
For single processor systems the one value is the same as for the metric
irix.kernel.all.io.wcancel.
@ irix.kernel.percpu.io.namei Number of pathname lookups performed
A count maintained for each processor, that is the cumulative number of
times pathnames have been translated to vnodes.
@ irix.kernel.percpu.io.dirblk Kilobytes of directory blocks scanned
A count maintained for each processor, that is the cumulative number of
kilobytes of directory blocks scanned.
@ irix.kernel.percpu.swap.swpocc Cumulative number of times swapped processes found to exist
A count maintained for each processor, that is the cumulative number of
times swapped processes were found to exist.
Note that this value is sampled and updated by the kernel only once per second.
@ irix.kernel.percpu.swap.swpque Cumulative sum of the length of the queue of swapped processes
A count maintained for each processor, that is the cumulative length
of the queue of swapped processes.
Note that this value is sampled and updated by the kernel only once per second.
@ irix.kernel.percpu.pswitch per processor cumulative process switches
A count maintained for each processor, that is the cumulative number of
process (context) switches that have occurred.
For single processor systems the one value is the same as for the metric
irix.kernel.all.pswitch.
@ irix.kernel.percpu.readch per processor number of bytes transferred by the read() system call
A count maintained for each processor, that is the cumulative number of
bytes transferred by the read() system call.
For single processor systems the one value is the same as for the metric
irix.kernel.all.readch.
@ irix.kernel.percpu.runocc per processor number of times the "run queue" is non-zero
At each "clock tick" if the number of runnable processes (i.e.
processes on the "run queue") for this processor is non-zero, one
instance of this counter is incremented by one.
For single processor systems the one value is the same as for the
metric irix.kernel.all.runocc.
@ irix.kernel.percpu.runque per processor cumulative length of the queue of runnable processes
At each "clock tick" the number of runnable processes (i.e. processes
on the "run queue") for this processor is added to one instance of this
counter.
Over two consecutive samples the "average" run queue length for
processor I may be computed as
if delta(irix.kernel.percpu.runocc[I]) is zero
zero
else
delta(irix.kernel.percpu.runque[I]) / delta(irix.kernel.percpu.runocc[I])
For single processor systems the one value is the same as for the
metric irix.kernel.all.runque.
Note that this value is sampled and updated by the kernel only once per second.
@ irix.kernel.percpu.syscall per processor number of system calls made
A count maintained for each processor, that accumulates the number of
system calls made.
For single processor systems the one value is the same as for the
metric irix.kernel.all.syscall.
@ irix.kernel.percpu.sysexec per processor number of exec() calls made
A count maintained for each processor, that accumulates the number of
exec() calls made.
For single processor systems the one value is the same as for the
metric irix.kernel.all.sysexec.
@ irix.kernel.percpu.sysfork per processor number of fork() calls made
A count maintained for each processor, that accumulates the number of
fork() calls made.
For single processor systems the one value is the same as for the
metric irix.kernel.all.sysfork.
@ irix.kernel.percpu.sysread per processor number of read() calls made
A count maintained for each processor, that accumulates the number of
read() calls made.
For single processor systems the one value is the same as for the metric
irix.kernel.all.sysread.
@ irix.kernel.percpu.syswrite per processor number of write() calls made
A count maintained for each processor, that accumulates the number of
write() calls made.
For single processor systems the one value is the same as for the metric
irix.kernel.all.syswrite.
@ irix.kernel.percpu.sysother per processor number of "other" system calls made
A count maintained for each processor, that accumulates the number of
system calls (other than read(), write(), fork() and exec()) made.
For single processor systems the one value is the same as for the metric
irix.kernel.all.sysother.
@ irix.kernel.percpu.writech per processor number of bytes transferred by the write() system call
A count maintained for each processor, that accumulates the number of
bytes transferred by the write() system call.
For single processor systems the one value is the same as for the metric
irix.kernel.all.writech.
@ irix.kernel.percpu.tty.recvintr per processor input interrupt count for serial devices
A count maintained for each processor, that accumulates the number of input
interrupts for serial devices.
For single processor systems the one value is the same as for the metric
irix.kernel.all.tty.recvintr.
@ irix.kernel.percpu.tty.xmitintr per processor output interrupt count for serial devices
A count maintained for each processor, that accumulates the number of output
interrupts for serial devices.
For single processor systems the one value is the same as for the metric
irix.kernel.all.tty.xmitintr.
@ irix.kernel.percpu.tty.mdmintr per processor modem control interrupt count for serial devices
A count maintained for each processor, that accumulates the number of modem
control interrupts for serial devices.
For single processor systems the one value is the same as for the metric
irix.kernel.all.tty.mdmintr.
@ irix.kernel.percpu.tty.out per processor count of characters output to serial devices
A count maintained for each processor, that accumulates the number of
characters output to serial devices.
For single processor systems the one value is the same as for the metric
irix.kernel.all.tty.out.
@ irix.kernel.percpu.tty.raw per processor count of "raw" characters received on serial lines
A count maintained for each processor, that accumulates the number of "raw"
characters received on serial lines.
For single processor systems the one value is the same as for the metric
irix.kernel.all.tty.raw.
@ irix.kernel.percpu.tty.canon per processor count of "canonical" characters received by the tty driver
A count maintained for each processor, that accumulates the number of
"canonical" characters received by the tty driver.
For single processor systems the one value is the same as for the metric
irix.kernel.all.tty.canon.
@ irix.kernel.percpu.intr.vme per processor count of VME interrupts
A count maintained for each processor, that accumulates the number of
VME interrupts processed.
For single processor systems the one value is the same as for the metric
irix.kernel.all.intr.vme.
@ irix.kernel.percpu.intr.non_vme per processor count of non-VME interrupts
A count maintained for each processor, that accumulates the number of
non-VME interrupts processed.
For single processor systems the one value is the same as for the metric
irix.kernel.all.intr.non_vme.
@ irix.kernel.percpu.ipc.msg per processor count of System V message operations
A count maintained for each processor, that accumulates the number of
System V message operations performed.
For single processor systems the one value is the same as for the metric
irix.kernel.all.ipc.msg.
@ irix.kernel.percpu.ipc.sema per processor count of System V semaphore operations
A count maintained for each processor, that accumulates the number of
System V semaphore operations performed.
For single processor systems the one value is the same as for the metric
irix.kernel.all.ipc.sema.
@ irix.kernel.percpu.pty.masterch per processor count of characters sent to pty master devices
A count maintained for each processor, that accumulates the number of
characters sent to pty master devices.
For single processor systems the one value is the same as for the metric
irix.kernel.all.pty.masterch.
@ irix.kernel.percpu.pty.slavech per processor count of characters sent to pty slave devices
A count maintained for each processor, that accumulates the number of
characters sent to pty slave devices.
For single processor systems the one value is the same as for the metric
irix.kernel.all.pty.slavech.
@ irix.kernel.percpu.flock.alloc per processor number of record locks allocated
A count maintained for each processor, that accumulates the number of
record locks allocated.
For single processor systems the one value is the same as for the metric
irix.kernel.all.flock.alloc.
@ irix.kernel.percpu.flock.inuse per processor number of record locks currently in use
For each processor, the number of record locks currently in use.
For single processor systems the one value is the same as for the metric
irix.kernel.all.flock.inuse.
@ irix.xpc.kernel.percpu.cpu.idle High precision irix.kernel.percpu.cpu.idle
This is a higher precision version of irix.kernel.percpu.cpu.idle.
See help on irix.kernel.percpu.cpu.idle for more details.
@ irix.xpc.kernel.percpu.cpu.intr High precision irix.kernel.percpu.cpu.intr
This is a higher precision version of irix.kernel.percpu.cpu.intr.
See help on irix.kernel.percpu.cpu.intr for more details.
@ irix.xpc.kernel.percpu.cpu.sys High precision irix.kernel.percpu.cpu.sys
This is a higher precision version of irix.kernel.percpu.cpu.sys.
See help on irix.kernel.percpu.cpu.sys for more details.
@ irix.xpc.kernel.percpu.cpu.sxbrk High precision irix.kernel.percpu.cpu.sxbrk
This is a higher precision version of irix.kernel.percpu.cpu.sxbrk.
See help on irix.kernel.percpu.cpu.sxbrk for more details.
@ irix.xpc.kernel.percpu.cpu.user High precision irix.kernel.percpu.cpu.user
This is a higher precision version of irix.kernel.percpu.cpu.user.
See help on irix.kernel.percpu.cpu.user for more details.
@ irix.xpc.kernel.percpu.cpu.wait.total High precision irix.kernel.percpu.cpu.wait.total
This is a higher precision version of irix.kernel.percpu.cpu.wait.total.
See help on irix.kernel.percpu.cpu.wait.total for more details.
@ irix.xpc.kernel.percpu.cpu.wait.gfxc High precision irix.kernel.percpu.cpu.wait.gfxc
This is a higher precision version of irix.kernel.percpu.cpu.wait.gfxc.
See help on irix.kernel.percpu.cpu.wait.gfxc for more details.
@ irix.xpc.kernel.percpu.cpu.wait.gfxf High precision irix.kernel.percpu.cpu.wait.gfxf
This is a higher precision version of irix.kernel.percpu.cpu.wait.gfxf.
See help on irix.kernel.percpu.cpu.wait.gfxf for more details.
@ irix.xpc.kernel.percpu.cpu.wait.io High precision irix.kernel.percpu.cpu.wait.io
This is a higher precision version of irix.kernel.percpu.cpu.wait.io.
See help on irix.kernel.percpu.cpu.wait.io for more details.
@ irix.xpc.kernel.percpu.cpu.wait.pio High precision irix.kernel.percpu.cpu.wait.pio
This is a higher precision version of irix.kernel.percpu.cpu.wait.pio.
See help on irix.kernel.percpu.cpu.wait.pio for more details.
@ irix.xpc.kernel.percpu.cpu.wait.swap High precision irix.kernel.percpu.cpu.wait.swap
This is a higher precision version of irix.kernel.percpu.cpu.wait.swap.
See help on irix.kernel.percpu.cpu.wait.swap for more details.
@ irix.xpc.kernel.percpu.io.bread High precision irix.kernel.percpu.io.bread
This is a higher precision version of irix.kernel.percpu.io.bread.
See help on irix.kernel.percpu.io.bread for more details.
@ irix.xpc.kernel.percpu.io.bwrite High precision irix.kernel.percpu.io.bwrite
This is a higher precision version of irix.kernel.percpu.io.bwrite.
See help on irix.kernel.percpu.io.bwrite for more details.
@ irix.xpc.kernel.percpu.io.lread High precision irix.kernel.percpu.io.lread
This is a higher precision version of irix.kernel.percpu.io.lread.
See help on irix.kernel.percpu.io.lread for more details.
@ irix.xpc.kernel.percpu.io.phread High precision irix.kernel.percpu.io.phread
This is a higher precision version of irix.kernel.percpu.io.phread.
See help on irix.kernel.percpu.io.phread for more details.
@ irix.xpc.kernel.percpu.io.phwrite High precision irix.kernel.percpu.io.phwrite
This is a higher precision version of irix.kernel.percpu.io.phwrite.
See help on irix.kernel.percpu.io.phwrite for more details.
@ irix.xpc.kernel.percpu.io.wcancel High precision irix.kernel.percpu.io.wcancel
This is a higher precision version of irix.kernel.percpu.io.wcancel.
See help on irix.kernel.percpu.io.wcancel for more details.
@ irix.xpc.kernel.percpu.io.dirblk High precision irix.kernel.percpu.io.dirblk
This is a higher precision version of irix.kernel.percpu.io.dirblk.
See help on irix.kernel.percpu.io.dirblk for more details.
@ irix.xpc.kernel.percpu.io.lwrite High precision irix.kernel.percpu.io.lwrite
This is a higher precision version of irix.kernel.percpu.io.lwrite.
See help on irix.kernel.percpu.io.lwrite for more details.
@ 1.2 Disk device instance domain
The disk device instance domain includes one entry for each configured
disk in the system. In this context, a disk is:
(a) a directly connected disk device
(b) the SCSI adapter and Logical Unit Number (LUN) associated with a
RAID device
(c) a directly connected CD-ROM device
The instance names are constructed using a truncated form of the device
names in /dev/rdsk stripped of the volume or partition information,
e.g. "dks2d1" for device 1 on controller 2, or "dks56d7l3" for device
7, LUN 3 on controller 56.
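The naming convention above can be unpacked mechanically; a minimal
sketch, where the regular expression is an assumption inferred from the
two examples, not part of PCP:

```python
import re

# Pattern inferred from the examples above: controller number,
# device number, optional LUN number.
_DKS = re.compile(r"^dks(\d+)d(\d+)(?:l(\d+))?$")

def parse_dks(name):
    """Split an instance name like 'dks56d7l3' into
    (controller, device, lun); lun is None when absent."""
    m = _DKS.match(name)
    if m is None:
        raise ValueError("not a dks instance name: %r" % name)
    ctl, dev, lun = m.groups()
    return int(ctl), int(dev), None if lun is None else int(lun)

print(parse_dks("dks2d1"))     # -> (2, 1, None)
print(parse_dks("dks56d7l3"))  # -> (56, 7, 3)
```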
@ 1.8 Disk controller instance domain
The disk controller instance domain includes one entry for each configured
disk controller in the system.
The instance names are constructed using a truncated form of the device
names in /dev/rdsk stripped of the device name and the volume or
partition information, e.g. "dks2" for controller 2, or "dks56" for
controller 56.
@ irix.disk.dev.blktotal cumulative blocks transferred to or from a disk
The cumulative number of blocks transferred to or from a disk device, in
units of 512 byte blocks.
This metric is akin to the "blks/s" values reported by the -d option to
sar(1).
@ irix.disk.dev.blkread cumulative blocks read from a disk
The cumulative number of blocks read from a disk device, in units of
512 byte blocks.
This metric is akin to the expression "blks/s" - "wblks/s" using the
values reported by the -d option to sar(1).
@ irix.disk.dev.blkwrite cumulative blocks written to a disk
The cumulative number of blocks written to a disk device, in units of
512 byte blocks.
This metric is akin to the "wblks/s" values reported by the -d option
to sar(1).
@ irix.disk.dev.total cumulative transfers to or from a disk
The cumulative number of transfers (independent of transfer size) to or
from a disk device.
When converted to a rate, this is equivalent to "I/Os per second" or
IOPS.
This metric is akin to the "r+w/s" values reported by the -d option to
sar(1).
@ irix.disk.dev.read cumulative reads from a disk
The cumulative number of reads (independent of transfer size) from a
disk device.
This metric is akin to the expression "r+w/s" - "w/s" using the values
reported by the -d option to sar(1).
@ irix.disk.dev.write cumulative transfers to a disk
The cumulative number of writes (independent of transfer size) to a
disk device.
This metric is akin to the "w/s" values reported by the -d option to
sar(1).
@ irix.disk.dev.active cumulative disk active time
The cumulative number of milliseconds since system boot time that a
disk device has spent processing requests.
This metric has units of time (milliseconds) and semantics of counter
(it is incremented each time an I/O is completed). When displayed by
most PCP tools, metrics of this type are converted to time utilization
(sometimes expressed as a percentage). This should be interpreted as
the fraction of the sample time interval for which the disk was busy
handling requests, and is akin to the "%busy" values reported by the -d
option to sar(1).
Due to the asynchrony in the I/O start and stop time with respect to the
sample time, and the effects of multiple outstanding requests for a
single disk, utilizations of greater than 1.0 (or more than 100%) may
sometimes be observed.
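The counter-to-utilization conversion described above can be sketched
as follows; the millisecond figures are hypothetical sample values:

```python
def disk_utilization(active_ms_now, active_ms_prev, interval_ms):
    """Fraction of the sample interval the disk was busy handling
    requests, as most PCP tools report this metric.  Values above
    1.0 are possible, for the reasons given in the help text."""
    return (active_ms_now - active_ms_prev) / interval_ms

# the disk accumulated 800 ms of active time over a 2000 ms interval
print(disk_utilization(10800, 10000, 2000))  # -> 0.4
```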
@ irix.disk.dev.response cumulative disk response time
The cumulative I/O response time for a disk device expressed in
milliseconds since system boot time.
The I/O response time includes time spent in the queue of pending
requests plus the time the disk takes to handle the request (the latter
is accounted for by irix.disk.dev.active).
This metric has units of time (milliseconds) and semantics of counter
(it is incremented each time an I/O is completed). When displayed by
most PCP tools, metrics of this type are converted to time utilization
(sometimes expressed as a percentage). Due to the effects of multiple
accounting for the time spent in the queue when more than one request
is in the queue the values may be very much larger than 1.0 (or greater
than 100%) particularly when the arrival of I/O requests is subject to
"bursts", e.g. when the page cache is periodically flushed.
It is unlikely that any insight can be gained by reporting this metric
in isolation.
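Combined with irix.disk.dev.total, however, this counter yields a mean
response time per completed I/O; a sketch, with hypothetical sample
values of the two cumulative counters:

```python
def avg_response_ms(resp_now, resp_prev, total_now, total_prev):
    """Mean response time per I/O over a sample interval, derived
    from the cumulative response time and transfer counters;
    None when no I/Os completed in the interval."""
    d_io = total_now - total_prev
    if d_io == 0:
        return None
    return (resp_now - resp_prev) / d_io

# 2000 ms of accumulated response time over 100 completed I/Os
print(avg_response_ms(50000, 48000, 1100, 1000))  # -> 20.0
```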
@ irix.disk.dev.bytes cumulative Kbytes transferred to or from a disk
The cumulative number of Kbytes transferred to or from a disk device.
Simply irix.disk.dev.blktotal divided by 2 to convert from units of 512
bytes to 1024 bytes.
@ irix.disk.dev.read_bytes cumulative Kbytes read from a disk
The cumulative number of Kbytes read from a disk device.
Simply irix.disk.dev.blkread divided by 2 to convert from units of 512
bytes to 1024 bytes.
@ irix.disk.dev.write_bytes cumulative Kbytes written to a disk
The cumulative number of Kbytes written to a disk device.
Simply irix.disk.dev.blkwrite divided by 2 to convert from units of 512
bytes to 1024 bytes.
@ irix.disk.ctl.avg_disk.active average disk active time on each controller
The average number of milliseconds since system boot time
that all disks attached to a particular controller have spent
processing requests. This is equivalent to irix.disk.ctl.active
divided by the number of disks on the controller.
Refer to the description of irix.disk.dev.active.
It is unlikely that any insight can be gained by reporting this metric
in isolation.
@ irix.disk.ctl.avg_disk.response average disk response time on each controller
The average I/O response time for all disks attached to a particular
controller. This is equivalent to irix.disk.ctl.response
divided by the number of disks on the controller.
Refer to the description of irix.disk.dev.response.
It is unlikely that any insight can be gained by reporting this metric
in isolation.
@ irix.disk.ctl.blktotal cumulative blocks transferred to or from a disk controller
The cumulative number of blocks transferred to or from all disk devices
attached to a particular controller, in units of 512 byte blocks.
@ irix.disk.ctl.blkread cumulative blocks read from a disk controller
The cumulative number of blocks read from all disk devices attached to
a particular controller, in units of 512 byte blocks.
@ irix.disk.ctl.blkwrite cumulative blocks written to a disk controller
The cumulative number of blocks written to all disk devices attached to
a particular controller, in units of 512 byte blocks.
@ irix.disk.ctl.total cumulative transfers to or from a disk controller
The cumulative number of transfers (independent of transfer size) to or
from all disk devices attached to a particular controller.
@ irix.disk.ctl.read cumulative reads from a disk controller
The cumulative number of reads (independent of transfer size) from all
disk devices attached to a particular controller.
@ irix.disk.ctl.write cumulative writes to a disk controller
The cumulative number of writes (independent of transfer size) to all
disk devices attached to a particular controller.
@ irix.disk.ctl.active cumulative disk controller active time
The cumulative number of milliseconds since system boot time
that all disks attached to a particular controller have spent
processing requests.
Refer to the description of irix.disk.dev.active.
It is unlikely that any insight can be gained by reporting this metric
in isolation.
@ irix.disk.ctl.response cumulative disk controller response time
The cumulative I/O response time for all disks attached to a particular
controller, expressed in milliseconds since system boot time.
Refer to the description of irix.disk.dev.response.
It is unlikely that any insight can be gained by reporting this metric
in isolation.
@ irix.disk.ctl.bytes cumulative Kbytes transferred to or from a disk controller
The cumulative number of Kbytes transferred to or from all disk devices
attached to a particular controller.
Simply irix.disk.ctl.blktotal divided by 2 to convert from units of 512
bytes to 1024 bytes.
@ irix.disk.ctl.read_bytes cumulative Kbytes read from a disk controller
The cumulative number of Kbytes read from all disk devices attached to
a particular controller.
Simply irix.disk.ctl.blkread divided by 2 to convert from units of 512
bytes to 1024 bytes.
@ irix.disk.ctl.write_bytes cumulative Kbytes written to a disk controller
The cumulative number of Kbytes written to all disk devices attached to
a particular controller.
Simply irix.disk.ctl.blkwrite divided by 2 to convert from units of 512
bytes to 1024 bytes.
@ irix.disk.all.blktotal cumulative blocks transferred to or from all disks
The cumulative number of blocks transferred to or from all disk
devices, in units of 512 byte blocks.
@ irix.disk.all.blkread cumulative blocks read from all disks
The cumulative number of blocks read from all disk devices, in units of
512 byte blocks.
@ irix.disk.all.blkwrite cumulative blocks written to all disks
The cumulative number of blocks written to all disk devices, in units
of 512 byte blocks.
@ irix.disk.all.total cumulative transfers to or from all disks
The cumulative number of transfers (independent of transfer size) to or
from all disk devices.
@ irix.disk.all.read cumulative reads from all disks
The cumulative number of reads (independent of transfer size) from all
disk devices.
@ irix.disk.all.write cumulative writes to all disks
The cumulative number of writes (independent of transfer size) to all
disk devices.
@ irix.disk.all.active cumulative active time for all disks
The cumulative number of milliseconds since system boot time that all
disks have spent processing requests.
Refer to the description of irix.disk.dev.active.
It is unlikely that any insight can be gained by reporting this metric
in isolation.
@ irix.disk.all.response cumulative response time for all disks
The cumulative I/O response time for all disks expressed in
milliseconds since system boot time.
Refer to the description of irix.disk.dev.response.
It is unlikely that any insight can be gained by reporting this metric
in isolation.
@ irix.disk.all.bytes cumulative Kbytes transferred to or from all disks
The cumulative number of Kbytes transferred to or from all disk
devices.
Simply irix.disk.all.blktotal divided by 2 to convert from units of 512
bytes to 1024 bytes.
@ irix.disk.all.read_bytes cumulative Kbytes read from all disks
The cumulative number of Kbytes read from all disk devices.
Simply irix.disk.all.blkread divided by 2 to convert from units of 512
bytes to 1024 bytes.
@ irix.disk.all.write_bytes cumulative Kbytes written to all disks
The cumulative number of Kbytes written to all disk devices.
Simply irix.disk.all.blkwrite divided by 2 to convert from units of 512
bytes to 1024 bytes.
@ irix.disk.all.avg_disk.active average disk active time
The average number of milliseconds since system boot time
that all disks have spent processing requests. This is equivalent to
irix.disk.all.active divided by the number of disk devices.
Refer to the description of irix.disk.dev.active.
It is unlikely that any insight can be gained by reporting this metric
in isolation.
@ irix.disk.all.avg_disk.response average disk response time
The average I/O response time for all disks. This is equivalent to
irix.disk.all.response divided by the number of disk devices.
Refer to the description of irix.disk.dev.response.
It is unlikely that any insight can be gained by reporting this metric
in isolation.
@ irix.xpc.disk.dev.read High precision irix.disk.dev.read
This is a higher precision version of irix.disk.dev.read.
See help on irix.disk.dev.read for more details.
@ irix.xpc.disk.dev.active High precision irix.disk.dev.active
This is a higher precision version of irix.disk.dev.active.
See help on irix.disk.dev.active for more details.
@ irix.xpc.disk.dev.blkread High precision irix.disk.dev.blkread
This is a higher precision version of irix.disk.dev.blkread.
See help on irix.disk.dev.blkread for more details.
@ irix.xpc.disk.dev.bytes High precision irix.disk.dev.bytes
This is a higher precision version of irix.disk.dev.bytes.
See help on irix.disk.dev.bytes for more details.
@ irix.xpc.disk.dev.read_bytes High precision irix.disk.dev.read_bytes
This is a higher precision version of irix.disk.dev.read_bytes.
See help on irix.disk.dev.read_bytes for more details.
@ irix.xpc.disk.dev.write_bytes High precision irix.disk.dev.write_bytes
This is a higher precision version of irix.disk.dev.write_bytes.
See help on irix.disk.dev.write_bytes for more details.
@ irix.xpc.disk.dev.response High precision irix.disk.dev.response
This is a higher precision version of irix.disk.dev.response.
See help on irix.disk.dev.response for more details.
@ hinv.map.disk path to a disk in the hardware graph
For IRIX versions that support the hardware graph (/hw and below),
the path to a disk in the hardware graph filesystem.
There is one string-valued instance of this metric for each disk in the
system.
@ hinv.nctl number of active disk controllers
The number of active disk controllers on this system.
@ hinv.ctl.ndisk number of disk spindles on each disk controller
The number of active disk spindles on each disk controller.
@ irix.rpc.client.badcalls cumulative total of bad client RPC requests
Cumulative total of bad client RPC requests processed
since nfsstats were last cleared.
@ irix.rpc.client.badxid cumulative total of client RPC requests with a bad xid
Cumulative total of client RPC requests with a bad transaction
identifier (xid) processed since nfsstats were last cleared.
@ irix.rpc.client.calls cumulative total of client RPC requests
Cumulative total of client RPC requests processed
since nfsstats were last cleared.
@ irix.rpc.client.newcred cumulative total of client RPC new credentials requests
Cumulative total of client RPC new credentials requests processed
since nfsstats were last cleared.
@ irix.rpc.client.retrans cumulative total of retransmitted client RPC requests
Cumulative total of retransmitted client RPC requests processed
since nfsstats were last cleared.
@ irix.rpc.client.timeout cumulative total of timed out client RPC requests
Cumulative total of timed out client RPC requests processed
since nfsstats were last cleared.
@ irix.rpc.client.wait cumulative total of client RPC handle waits
Cumulative total of client RPC handle waits processed
since nfsstats were last cleared.
@ irix.rpc.client.badverfs cumulative total of client RPC authentication errors
Cumulative total of client RPC authentication errors
(due to an invalid response from the server) since nfsstats
were last cleared.
@ irix.rpc.server.badcalls cumulative total of bad server RPC requests
Cumulative total of bad server RPC requests processed
since nfsstats were last cleared.
@ irix.rpc.server.badlen cumulative total of bad length server RPC requests
Cumulative total of bad length server RPC requests processed
since nfsstats were last cleared.
@ irix.rpc.server.calls cumulative total of server RPC requests
Cumulative total of server RPC requests processed
since nfsstats were last cleared.
@ irix.rpc.server.dupage average age of recycled RPC server cache entries
Instantaneous average age of recycled RPC server cache entries
since nfsstats were last cleared.
@ irix.rpc.server.duphits cumulative total of server RPC duplicate cache hit requests
Cumulative total of server RPC duplicate cache hit requests processed
since nfsstats were last cleared.
@ irix.rpc.server.nullrecv cumulative total of null server RPC requests
Cumulative total of null server RPC requests processed
since nfsstats were last cleared.
@ irix.rpc.server.xdrcall cumulative total of xdr server RPC requests
Cumulative total of xdr server RPC requests processed
since nfsstats were last cleared.
@ 1.3 Instance domain PM_INDOM_NFSREQ for NFS request counts
The PM_INDOM_NFSREQ instance domain enumerates the 18 NFS request
operation types for both client and server requests. They are:
null getattr setattr root lookup readlink read wrcache write
create remove rename link symlink mkdir rmdir readdir fsstat
Instance identifiers correspond to the request index (in the order
above, from 0 .. 17 inclusive) and instance names are the name of each
request type.
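The index-to-name mapping described above can be written down directly;
a sketch, with the names taken verbatim from the list:

```python
# The 18 NFS request types, indexed by instance identifier (0..17)
NFS_REQS = ("null getattr setattr root lookup readlink read wrcache write "
            "create remove rename link symlink mkdir rmdir readdir fsstat").split()

print(len(NFS_REQS))   # -> 18
print(NFS_REQS[0])     # -> null
print(NFS_REQS[17])    # -> fsstat
```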
@ irix.nfs.client.badcalls cumulative total of client NFS failures
Cumulative total of failed client NFS requests processed since nfsstats
were last cleared.
@ irix.nfs.client.calls cumulative total of client NFS requests
Cumulative total of client NFS requests processed since nfsstats were
last cleared.
@ irix.nfs.client.nclget cumulative total of client handle gets
Cumulative total of client handle gets processed since nfsstats were
last cleared.
@ irix.nfs.client.nclsleep cumulative total of client handle waits
Cumulative total of client handle waits processed since nfsstats were
last cleared.
@ irix.nfs.client.reqs cumulative total of client NFS requests by request type
Cumulative total of each type of client NFS request processed since
nfsstats were last cleared.
@ irix.nfs.server.badcalls cumulative total of server NFS failures
Cumulative total of failed server NFS requests processed
since nfsstats were last cleared.
@ irix.nfs.server.calls cumulative total of server NFS requests
Cumulative total of server NFS requests processed
since nfsstats were last cleared.
@ irix.nfs.server.reqs cumulative total of server NFS requests by request type
Cumulative total of each type of server NFS request
processed since nfsstats were last cleared.
@ 1.13 Instance domain PM_INDOM_NFS3REQ for NFS3 request counts
The PM_INDOM_NFS3REQ instance domain enumerates the 22 NFS3 request
operation types for both client and server requests. They are:
null getattr setattr lookup access readlink read write create
mkdir symlink mknod remove rmdir rename link readdir readdir+
fsstat fsinfo pathconf commit
Instance identifiers correspond to the request index (in the order
above, from 0 .. 21 inclusive) and instance names are the name of each
request type.
@ irix.nfs3.client.badcalls cumulative total of client NFS3 failures
Cumulative total of failed client NFS3 requests processed since nfsstats
were last cleared.
@ irix.nfs3.client.calls cumulative total of client NFS3 requests
Cumulative total of client NFS3 requests processed since nfsstats were
last cleared.
@ irix.nfs3.client.nclget cumulative total of client handle gets
Cumulative total of client handle gets processed since nfsstats were
last cleared.
@ irix.nfs3.client.nclsleep cumulative total of client handle waits
Cumulative total of client handle waits processed since nfsstats were
last cleared.
@ irix.nfs3.client.reqs cumulative total of client NFS3 requests by request type
Cumulative total of each type of client NFS3 request processed since
nfsstats were last cleared.
@ irix.nfs3.server.badcalls cumulative total of server NFS3 failures
Cumulative total of failed server NFS3 requests processed since nfsstats
were last cleared.
@ irix.nfs3.server.calls cumulative total of server NFS3 requests
Cumulative total of server NFS3 requests processed since nfsstats were
last cleared.
@ irix.nfs3.server.reqs cumulative total of server NFS3 requests by request type
Cumulative total of each type of server NFS3 request processed since
nfsstats were last cleared.
@ 1.4 Instance domain PM_INDOM_SWAP for logical swap areas
The PM_INDOM_SWAP instance domain enumerates logical swap areas.
Instance identifiers are logical swap numbers and instance names are
the pathnames for each swap file or device.
@ irix.swapdev.free physical swap free space
The amount of free swap space for each logical swap area.
@ irix.swapdev.length physical swap size
Physical length of each swap area.
@ irix.swapdev.maxswap maximum swap length
The maximum size the logical swap area will be grown to.
@ irix.swapdev.vlength virtual swap size
The size the system believes each logical swap area can hold.
@ irix.swapdev.priority swap resource priority
The swap resource priority level (a signed integer) for each logical
swap area.
@ irix.swap.free total physical swap free space
The amount of free swap space for all swap areas.
The same as the "free" column reported by "swap -l", subject
to conversion of units from blocks to Kbytes.
@ irix.swap.length total physical swap size
Total physical length of all swap areas.
The same as the "blocks" column reported by "swap -l", subject
to conversion of units from blocks to Kbytes.
@ irix.swap.maxswap aggregate of maximum swap sizes
The sum of the maximum size each logical swap area may be grown to.
The same as the "maxswap" column reported by "swap -l", subject
to conversion of units from blocks to Kbytes.
@ irix.swap.vlength total virtual swap size
Total virtual length of all logical swap areas.
@ irix.swap.alloc total allocated logical swap size
Total logical swap allocation, including physical memory, real
swap and virtual swap.
The same as the "allocated" amount reported by "swap -s".
@ irix.swap.reserve total reserved logical swap size
Total logical swap reservation (in addition to the swap allocation),
including physical memory, real swap and virtual swap.
The same as the "add'l reserved" amount reported by "swap -s".
@ irix.swap.used total used logical swap size
Total logical swap used (equals irix.swap.alloc plus irix.swap.reserve),
including physical memory, real swap and virtual swap.
The same as the "used" amount reported by "swap -s".
@ irix.swap.unused total unused logical swap size
Total logical swap unused, including physical memory, real swap and
virtual swap.
The same as the "available" amount reported by "swap -s".
@ irix.network.icmp.error # of calls to icmp_error
@ irix.network.icmp.oldshort no error sent because the old ip packet was too short
@ irix.network.icmp.oldicmp no error sent because the old packet was itself icmp
@ irix.network.icmp.badcode icmp_code out of range
@ irix.network.icmp.tooshort packet < ICMP_MINLEN
@ irix.network.icmp.checksum bad checksum
@ irix.network.icmp.badlen calculated bound mismatch
@ irix.network.icmp.reflect number of responses
@ irix.network.icmp.inhist.echoreply input histogram: echo reply
@ irix.network.icmp.inhist.unreach input histogram: destination unreachable
@ irix.network.icmp.inhist.sourcequench input histogram: packet lost, slow down
@ irix.network.icmp.inhist.redirect input histogram: shorter route
@ irix.network.icmp.inhist.echo input histogram: echo service
@ irix.network.icmp.inhist.routeradvert input histogram: router advertisement
@ irix.network.icmp.inhist.routersolicit input histogram: router solicitation
@ irix.network.icmp.inhist.timxceed input histogram: time exceeded
@ irix.network.icmp.inhist.paramprob input histogram: ip header bad
@ irix.network.icmp.inhist.tstamp input histogram: timestamp request
@ irix.network.icmp.inhist.tstampreply input histogram: timestamp reply
@ irix.network.icmp.inhist.ireq input histogram: information request
@ irix.network.icmp.inhist.ireqreply input histogram: information reply
@ irix.network.icmp.inhist.maskreq input histogram: address mask request
@ irix.network.icmp.inhist.maskreply input histogram: address mask reply
@ irix.network.icmp.outhist.echoreply output histogram: echo reply
@ irix.network.icmp.outhist.unreach output histogram: destination unreachable
@ irix.network.icmp.outhist.sourcequench output histogram: packet lost, slow down
@ irix.network.icmp.outhist.redirect output histogram: shorter route
@ irix.network.icmp.outhist.echo output histogram: echo service
@ irix.network.icmp.outhist.routeradvert output histogram: router advertisement
@ irix.network.icmp.outhist.routersolicit output histogram: router solicitation
@ irix.network.icmp.outhist.timxceed output histogram: time exceeded
@ irix.network.icmp.outhist.paramprob output histogram: ip header bad
@ irix.network.icmp.outhist.tstamp output histogram: timestamp request
@ irix.network.icmp.outhist.tstampreply output histogram: timestamp reply
@ irix.network.icmp.outhist.ireq output histogram: information request
@ irix.network.icmp.outhist.ireqreply output histogram: information reply
@ irix.network.icmp.outhist.maskreq output histogram: address mask request
@ irix.network.icmp.outhist.maskreply output histogram: address mask reply
@ irix.network.igmp.rcv_total total IGMP messages received
@ irix.network.igmp.rcv_tooshort messages received with too few bytes
@ irix.network.igmp.rcv_badsum messages received with bad checksum
@ irix.network.igmp.rcv_queries received membership queries
@ irix.network.igmp.rcv_badqueries received invalid queries
@ irix.network.igmp.rcv_reports received membership reports
@ irix.network.igmp.rcv_badreports received invalid reports
@ irix.network.igmp.rcv_ourreports received reports for our groups
@ irix.network.igmp.snd_reports sent membership reports
@ irix.network.ip.badhlen packets received with header length < data size
@ irix.network.ip.badlen packets received with data length < header length
@ irix.network.ip.badoptions packets received with bad options
@ irix.network.ip.badsum packets received with bad header checksum
@ irix.network.ip.cantforward packets received that are not forwardable
@ irix.network.ip.cantfrag packets received that can't be fragmented
@ irix.network.ip.delivered datagrams delivered to upper level (for this host)
@ irix.network.ip.forward packets forwarded
@ irix.network.ip.fragdropped fragments dropped (dup or out of space)
@ irix.network.ip.fragmented datagrams successfully fragmented
@ irix.network.ip.fragments fragments received
@ irix.network.ip.fragtimeout fragments dropped after timeout
@ irix.network.ip.localout total packets sent from this host
@ irix.network.ip.noproto packets received for unknown/unsupported protocol
@ irix.network.ip.noroute output packets discarded due to no route
@ irix.network.ip.odropped output packets dropped due to no bufs, etc.
@ irix.network.ip.ofragments output fragments created
@ irix.network.ip.reassembled total packets reassembled ok
@ irix.network.ip.redirect packets forwarded on same net (redirects sent)
@ irix.network.ip.tooshort packets received with data size < data length
@ irix.network.ip.toosmall packets received with size smaller than minimum
@ irix.network.ip.badvers packets received with IP version not equal to 4
@ irix.network.ip.rawout total raw IP packets generated
@ irix.network.ip.total total packets received
@ irix.network.tcp.connattempt connection requests
@ irix.network.tcp.accepts connections accepted
@ irix.network.tcp.connects connections established (including accepts)
@ irix.network.tcp.drops connections dropped
@ irix.network.tcp.conndrops embryonic connections dropped
@ irix.network.tcp.closed connections closed (including drops)
@ irix.network.tcp.segstimed segments attempted to update rtt
@ irix.network.tcp.rttupdated segments successfully updated rtt
@ irix.network.tcp.delack delayed ack-only packets sent
@ irix.network.tcp.timeoutdrop connections dropped by rexmit timeout
@ irix.network.tcp.rexmttimeo retransmit timeouts
@ irix.network.tcp.persisttimeo persist timeouts
@ irix.network.tcp.keeptimeo keepalive timeouts
@ irix.network.tcp.keepprobe keepalive probes sent
@ irix.network.tcp.keepdrops connections dropped in keepalive
@ irix.network.tcp.sndtotal total packets sent
@ irix.network.tcp.sndpack data packets sent
@ irix.network.tcp.sndbyte data bytes sent
@ irix.network.tcp.sndrexmitpack data packets retransmitted
@ irix.network.tcp.sndrexmitbyte data bytes retransmitted
@ irix.network.tcp.sndacks ack-only packets sent
@ irix.network.tcp.sndprobe window probe packets sent
@ irix.network.tcp.sndurg URG only packets sent
@ irix.network.tcp.sndwinup window update packets sent
@ irix.network.tcp.sndctrl control (SYN|FIN|RST) packets sent
@ irix.network.tcp.sndrst packets with RST sent
@ irix.network.tcp.rcvtotal total packets received
@ irix.network.tcp.rcvpack packets received in sequence
@ irix.network.tcp.rcvbyte bytes received in sequence
@ irix.network.tcp.rcvbadsum packets discarded for bad checksums
@ irix.network.tcp.rcvbadoff packets discarded for bad header offset fields
@ irix.network.tcp.rcvshort packets discarded because packet too short
@ irix.network.tcp.rcvduppack completely duplicate packets received
@ irix.network.tcp.rcvdupbyte bytes of completely duplicate packets received
@ irix.network.tcp.rcvpartduppack packets with some duplicate data
@ irix.network.tcp.rcvpartdupbyte duplicated bytes in packets with some duplicate data
@ irix.network.tcp.rcvoopack out-of-order packets received
@ irix.network.tcp.rcvoobyte out-of-order bytes received
@ irix.network.tcp.rcvpackafterwin packets received with data after window
@ irix.network.tcp.rcvbyteafterwin bytes received of packets with data after window
@ irix.network.tcp.rcvafterclose packets received after close
@ irix.network.tcp.rcvwinprobe window probe packets received
@ irix.network.tcp.rcvdupack duplicate acks received
@ irix.network.tcp.rcvacktoomuch acks received for unsent data
@ irix.network.tcp.rcvackpack ack packets received
@ irix.network.tcp.rcvackbyte bytes acked by received acks
@ irix.network.tcp.rcvwinupd window update packets received
@ irix.network.tcp.pcbcachemiss input packets missing pcb cache
@ irix.network.tcp.predack ack predictions ok
@ irix.network.tcp.preddat in-sequence predictions ok
The number of input data packets received in-sequence, with
nothing in the reassembly queue and sufficient buffer space
available to receive the packet.
@ irix.network.tcp.pawsdrop segments discarded because of old timestamp
@ irix.network.tcp.badsyn bad connection attempts
@ irix.network.tcp.listendrop listen queue overflows
@ irix.network.tcp.persistdrop connections dropped by persist timeout
@ irix.network.tcp.synpurge drops from listen queue
@ irix.network.udp.ipackets total packets received
@ irix.network.udp.hdrops packets received, packet shorter than header
@ irix.network.udp.badsum packets received with checksum error
@ irix.network.udp.badlen packets received, data length larger than packet
@ irix.network.udp.noport packets received, dropped due to no socket on port
@ irix.network.udp.noportbcast packets received as broadcast, dropped due to no socket on port
@ irix.network.udp.fullsock packets received, not delivered due to input socket full
@ irix.network.udp.opackets total output packets
@ irix.network.udp.pcbcachemiss input packets missing pcb cache
@ irix.network.mbuf.alloc allocated mbufs obtained from page pool
@ irix.network.mbuf.typealloc allocated mbufs by mbuf type
@ irix.network.mbuf.clustalloc allocated mbuf clusters obtained from page pool
@ irix.network.mbuf.clustfree free mbuf clusters
@ irix.network.mbuf.failed times failed to find mbuf space
@ irix.network.mbuf.waited times waited for mbuf space
@ irix.network.mbuf.drained times drained protocols for mbuf space
@ irix.network.mcr.mfc_lookups forwarding cache hash table hits
The number of forwarding cache hash table hits.
This metric is exported from kna.mrtstat.mrts_mfc_lookups in sys/tcpipstats.h.
@ irix.network.mcr.mfc_misses forwarding cache hash table misses
The number of forwarding cache hash table misses.
This metric is exported from kna.mrtstat.mrts_mfc_misses in sys/tcpipstats.h.
@ irix.network.mcr.upcalls calls to mrouted
The number of calls to mrouted.
This metric is exported from kna.mrtstat.mrts_upcalls in sys/tcpipstats.h.
@ irix.network.mcr.no_route no route to packet origin
The number of multicast packets with no route to their origin.
This metric is exported from kna.mrtstat.mrts_no_route in sys/tcpipstats.h.
@ irix.network.mcr.bad_tunnel malformed tunnel options
The number of multicast packets with malformed tunnel options.
This metric is exported from kna.mrtstat.mrts_bad_tunnel in sys/tcpipstats.h.
@ irix.network.mcr.cant_tunnel no room for tunnel options
The number of multicast packets that could not be tunneled due to lack
of space for tunnel options.
This metric is exported from kna.mrtstat.mrts_cant_tunnel in sys/tcpipstats.h.
@ irix.network.mcr.wrong_if packets arrived on the wrong interface
The number of multicast packets that have arrived on the wrong interface.
This metric is exported from kna.mrtstat.mrts_wrong_if in sys/tcpipstats.h.
@ irix.network.mcr.upq_ovflw queue to mrouted overflowed
The number of overflows in the queue to mrouted.
This metric is exported from kna.mrtstat.mrts_upq_ovflw in sys/tcpipstats.h.
@ irix.network.mcr.cache_cleanups table entries not requiring mrouted
The number of hash table entries that do not require mrouted.
This metric is exported from kna.mrtstat.mrts_cache_cleanups in
sys/tcpipstats.h.
@ irix.network.mcr.drop_sel multicast packets dropped selectively
The number of multicast packets that have been selectively dropped.
This metric is exported from kna.mrtstat.mrts_drop_sel in sys/tcpipstats.h.
@ irix.network.mcr.q_overflow multicast packets dropped when overflowed
The number of multicast packets dropped due to the mrouted queue overflowing.
This metric is exported from kna.mrtstat.mrts_q_overflow in sys/tcpipstats.h.
@ irix.network.mcr.pkt2large multicast packets dropped due to size
The number of multicast packets dropped because their size was larger than
the bucket size.
This metric is exported from kna.mrtstat.mrts_pkt2large in sys/tcpipstats.h.
@ irix.network.mcr.upq_sockfull dropped mrouted calls due to full socket
The number of calls to mrouted that were dropped due to the socket being
full.
This metric is exported from kna.mrtstat.mrts_upq_sockfull in sys/tcpipstats.h.
@ irix.mem.freemem Cumulative free user memory
Cumulative Kbytes of free memory available to user processes.
This metric is exported from rminfo.freemem in sys/sysmp.h and
is equivalent to System Memory->free in osview.
@ irix.kernel.all.load Smoothed load averages
Smoothed system load average over 1, 5 and 15 minute intervals.
@ irix.kernel.all.users Number of user processes
The total number of user processes. Init, login and zombie processes are not
included.
This metric is fetched using the setutent(3) and getutent(3) library routines.
@ irix.mem.availsmem Amount of free virtual swap
The available real and swap memory in Kbytes.
This metric is exported from rminfo.availsmem in sys/sysmp.h and
is equivalent to System Memory->vswap in osview.
@ irix.mem.availrmem Amount of free real memory
The available real memory in Kbytes.
This metric is exported from rminfo.availrmem in sys/sysmp.h and
is used in osview for calculating System Memory->kernel and
System Memory->userdata.
@ irix.mem.bufmem Amount of memory holding filesystem meta-data
The amount of memory in Kbytes attached to the filesystem meta-data cache.
This metric is exported from rminfo.bufmem in sys/sysmp.h and
is equivalent to System Memory->fs ctl and used to calculate
System Memory->kernel in osview.
@ irix.mem.physmem Physical memory size
Total physical memory in Kbytes.
This metric is exported from rminfo.physmem in sys/sysmp.h and
is equivalent to System Memory->Phys in osview.
@ irix.mem.dchunkpages Amount of memory holding modified file data
The amount of memory in Kbytes holding modified filesystem file data, not
including dirty, unattached pages.
This metric is exported from rminfo.dchunkpages in sys/sysmp.h and
is used in osview for calculating System Memory->delwri.
@ irix.mem.pmapmem Amount of memory used in process map
The amount of memory in Kbytes used in the process map by the kernel.
This metric is exported from rminfo.pmapmem in sys/sysmp.h and
is equivalent to System Memory->ptbl in osview.
@ irix.mem.strmem Amount of memory used for streams
The amount of heap in Kbytes used for stream resources.
This metric is exported from rminfo.strmem in sys/sysmp.h and
is equivalent to System Memory->stream in osview.
@ irix.mem.chunkpages Amount of memory holding file data
The amount of memory in Kbytes holding filesystem file data, not including
dirty, unattached pages.
This metric is exported from rminfo.chunkpages in sys/sysmp.h and
is used in osview for calculating System Memory->fs data and
System Memory->userdata.
@ irix.mem.dpages Amount of memory holding dirty pages
The amount of memory in Kbytes holding dirty filesystem pages.
These are pages that have been pushed to the vnode list but is disjoint
from irix.mem.chunkpages.
This metric is exported from rminfo.dpages in sys/sysmp.h and
is used in osview for calculating System Memory->fs data,
System Memory->delwri and System Memory->userdata.
@ irix.mem.emptymem Amount of free memory not caching data
The amount of free memory in Kbytes not caching data.
This metric is exported from rminfo.emptymem in sys/sysmp.h.
@ irix.mem.util.kernel Amount of memory used by the kernel
The amount of memory in Kbytes that is consumed by kernel text and data.
This metric is derived using the equation:
irix.mem.physmem - (irix.mem.availrmem + irix.mem.bufmem)
and is equivalent to System Memory->Kernel in osview and gr_osview.
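The derivation above is simple arithmetic over three other metrics; a minimal sketch (Python used purely for illustration, and the sample values are hypothetical):

```python
def mem_util_kernel(physmem, availrmem, bufmem):
    """Derive irix.mem.util.kernel (Kbytes) using the equation from
    the help text: physmem - (availrmem + bufmem)."""
    return physmem - (availrmem + bufmem)

# Hypothetical sample values, in Kbytes.
print(mem_util_kernel(physmem=262144, availrmem=220000, bufmem=12000))  # -> 30144
```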
@ irix.mem.util.fs_ctl Amount of memory holding file system meta-data
The amount of memory in Kbytes that is holding file system meta-data.
This metric is equivalent to irix.mem.bufmem and System Memory->fs ctl in
osview and gr_osview.
@ irix.mem.util.fs_dirty Amount of memory holding file system data
The amount of memory in Kbytes that is holding file system data.
This metric is derived using the equation:
irix.mem.dchunkpages + irix.mem.dpages
and is equivalent to System Memory->fs data in osview and Memory->fs dirty in
gr_osview.
@ irix.mem.util.fs_clean Amount of clean memory held in file system cache
The amount of memory in Kbytes that is held in the file system chunk/buff
cache that is clean.
This metric is derived using the equation:
irix.mem.chunkpages - irix.mem.dchunkpages
and is equivalent to Memory->fs_clean in gr_osview.
@ irix.mem.util.free Cumulative free user memory
Cumulative Kbytes of free memory available to user processes.
This metric is equivalent to irix.mem.freemem and System Memory->free in
osview and gr_osview.
@ irix.mem.util.user Amount of memory used by user processes
The amount of memory in Kbytes used by active user processes.
This metric is derived using the equation:
irix.mem.availrmem-(irix.mem.chunkpages+irix.mem.dpages+irix.mem.freemem)
and is equivalent to System Memory->userdata in osview and Memory->user in
gr_osview.
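The remaining derived irix.mem.util.* metrics follow the same pattern; a sketch of the equations given above (sample values are hypothetical, in Kbytes):

```python
def mem_util(availrmem, chunkpages, dchunkpages, dpages, freemem):
    """Derive the irix.mem.util.* metrics whose equations are stated
    in the help text: fs_dirty, fs_clean and user."""
    return {
        "fs_dirty": dchunkpages + dpages,                         # fs data, dirty
        "fs_clean": chunkpages - dchunkpages,                     # fs cache, clean
        "user": availrmem - (chunkpages + dpages + freemem),      # active user data
    }

u = mem_util(availrmem=220000, chunkpages=40000, dchunkpages=5000,
             dpages=2000, freemem=100000)
print(u)  # -> {'fs_dirty': 7000, 'fs_clean': 35000, 'user': 78000}
```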
@ irix.resource.nproc maximum number of processes
Maximum number of processes. This is determined by and equal to the
size of the kernel process table.
@ irix.resource.nbuf number of buffers in disk buffer cache
@ irix.resource.hbuf number of hash buckets for disk buffer cache
@ irix.resource.syssegsz max pages of dynamic system memory
@ irix.resource.maxpmem maximum physical memory to use
Maximum physical memory to use. If irix.resource.maxpmem is 0, all
available physical memory is used (see hinv.physmem); otherwise the
value is the amount of memory to use, specified in pages.
@ irix.resource.maxdmasz maximum unbroken dma transfer size
@ irix.resource.dquot maximum number of file system quota structures
@ irix.resource.nstream_queue Number of streams queues
@ irix.resource.nstream_head Number of streams head structures
@ irix.resource.fileovf file table overflows
Number of times kernel failed to allocate a file table entry.
@ irix.resource.procovf process table overflows
Number of times a new process could not be created due to lack of
space in process table.
@ irix.network.interface.collisions count of collisions on CSMA network interface
@ irix.network.interface.mtu maximum transmission unit on network interface
@ irix.network.interface.noproto packets destined for unsupported protocol on network interface [MIB-II]
@ irix.network.interface.baudrate linespeed on network interface [MIB-II]
@ irix.network.interface.in.errors count of input errors on network interface
@ irix.network.interface.in.packets count of packets received on network interface
@ irix.network.interface.in.bytes total number of octets received on network interface [MIB-II]
@ irix.network.interface.in.mcasts packets received via broad/multicast on network interface [MIB-II]
@ irix.network.interface.in.drops packets dropped during input on network interface [MIB-II]
@ irix.network.interface.out.errors count of output errors on network interface
@ irix.network.interface.out.packets count of packets sent on network interface
@ irix.network.interface.out.bytes total number of octets sent on network interface [MIB-II]
@ irix.network.interface.out.mcasts packets sent via broad/multicast on network interface [MIB-II]
@ irix.network.interface.out.drops number of packets dropped due to full output queue on network interface
@ irix.network.interface.out.qdrops output packets discarded w/o error on network interface [MIB-II]
@ irix.network.interface.out.qlength number of packets currently in output queue on network interface
@ irix.network.interface.out.qmax maximum length of output queue on network interface
@ irix.network.interface.total.errors total errors on network interface
@ irix.network.interface.total.packets total packets sent and received on network interface
@ irix.network.interface.total.bytes total octets sent and received on network interface [MIB-II]
@ irix.network.interface.total.mcasts total packets sent and received via broad/multicast on network interface [MIB-II]
@ irix.network.interface.total.drops total packets dropped on network interface [MIB-II]
@ irix.xpc.network.interface.in.bytes High precision irix.network.interface.in.bytes
This is a higher precision version of irix.network.interface.in.bytes.
See help on irix.network.interface.in.bytes for more details.
@ irix.xpc.network.interface.out.bytes High precision irix.network.interface.out.bytes
This is a higher precision version of irix.network.interface.out.bytes.
See help on irix.network.interface.out.bytes for more details.
@ irix.xpc.network.interface.total.bytes High precision irix.network.interface.total.bytes
This is a higher precision version of irix.network.interface.total.bytes.
See help on irix.network.interface.total.bytes for more details.
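The high-precision irix.xpc.* variants exist because the base byte counters can wrap on fast interfaces. A minimal sketch of taking a wrap-corrected delta between two samples of a cumulative counter (the 32-bit width and the assumption of at most one wrap per sampling interval are illustrative, not taken from this help text):

```python
def counter_delta(prev, curr, width=32):
    """Delta between two samples of a wrapping cumulative counter,
    assuming at most one wrap occurred between the samples."""
    if curr >= prev:
        return curr - prev
    # Counter wrapped: add back one full counter period.
    return curr + (1 << width) - prev

print(counter_delta(0xFFFFFF00, 0x00000100))  # wrapped -> 512
print(counter_delta(100, 300))                # no wrap -> 200
```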
@ irix.resource.name_cache.hits hits that we can really use
@ irix.resource.name_cache.misses cache misses
@ irix.resource.name_cache.enters number of enters done
@ irix.resource.name_cache.dbl_enters number of enters tried when already cached
@ irix.resource.name_cache.long_enter long names tried to enter
@ irix.resource.name_cache.long_look long names tried to look up
@ irix.resource.name_cache.lru_empty LRU list empty
@ irix.resource.name_cache.purges number of purges of cache
@ irix.resource.name_cache.vfs_purges number of filesystem purges
@ irix.resource.name_cache.removes number of removals by name
@ irix.resource.name_cache.searches number of hash lookups
@ irix.resource.name_cache.stale_hits hits that found old vnode stamp
@ irix.resource.name_cache.steps hash chain steps for all searches
@ irix.resource.buffer_cache.getblks # getblks
@ irix.resource.buffer_cache.getblockmiss # times b_lock was missed
@ irix.resource.buffer_cache.getfound # times buffer found in cache
@ irix.resource.buffer_cache.getbchg # times buffer changed while waiting
@ irix.resource.buffer_cache.getloops # times back to top of getblk
@ irix.resource.buffer_cache.getfree # times fell through to freelist code
@ irix.resource.buffer_cache.getfreeempty # times freelist empty
@ irix.resource.buffer_cache.getfreehmiss # times couldn't get old hash
@ irix.resource.buffer_cache.getfreehmissx # times couldn't get old hash 20x
@ irix.resource.buffer_cache.getfreealllck # times all free bufs were locked
@ irix.resource.buffer_cache.getfreedelwri # times first free buf was DELWRI
@ irix.resource.buffer_cache.flush # times flushing occurred
@ irix.resource.buffer_cache.flushloops # times flushing looped
@ irix.resource.buffer_cache.getfreeref # times first free buf was ref
@ irix.resource.buffer_cache.getfreerelse # times first free buf had relse
@ irix.resource.buffer_cache.getoverlap # times overlapping buffer found
@ irix.resource.buffer_cache.clusters # times clustering attempted
@ irix.resource.buffer_cache.clustered # clustered buffers
@ irix.resource.buffer_cache.getfrag # page fragments read
@ irix.resource.buffer_cache.getpatch # partial buffers patched
@ irix.resource.buffer_cache.trimmed # of buffers made smaller
@ irix.resource.buffer_cache.inserts chunk inserts
@ irix.resource.buffer_cache.irotates rotates during inserts
@ irix.resource.buffer_cache.deletes chunk deletes
@ irix.resource.buffer_cache.drotates rotates during deletes
@ irix.resource.buffer_cache.decomms chunk decommissions
@ irix.resource.buffer_cache.flush_decomms chunk decommissions that flushed
@ irix.resource.buffer_cache.delrsv delalloc_reserve calls
@ irix.resource.buffer_cache.delrsvfree reserved without tossing
@ irix.resource.buffer_cache.delrsvclean tossed clean buffer
@ irix.resource.buffer_cache.delrsvdirty tossed dirty buffer
@ irix.resource.buffer_cache.delrsvwait waited for buffer to be freed
@ irix.resource.vnodes.vnodes total # vnodes, target
@ irix.resource.vnodes.extant total # vnodes currently allocated
@ irix.resource.vnodes.active # vnodes not on free lists
@ irix.resource.vnodes.alloc # times vn_alloc called
@ irix.resource.vnodes.aheap # times alloc from heap
@ irix.resource.vnodes.afree # times alloc from free list
@ irix.resource.vnodes.afreeloops # times pass on free list vnode
@ irix.resource.vnodes.get # times vn_get called
@ irix.resource.vnodes.gchg vn_get called, vnode changed
@ irix.resource.vnodes.gfree vn_get called, on free list
@ irix.resource.vnodes.rele # times vn_rele called
@ irix.resource.vnodes.reclaim # times vn_reclaim called
@ irix.resource.vnodes.destroy # times vnode struct removed
@ irix.resource.vnodes.afreemiss # times missed on free list search
[Not available before Irix 5.3]
@ irix.resource.efs.attempts # calls to iget()
@ irix.resource.efs.found found in hash list
@ irix.resource.efs.frecycle found but was recycled before lock
@ irix.resource.efs.missed # times missed - alloc new
@ irix.resource.efs.dup # times another process placed the inode on the list while we were adding it
@ irix.resource.efs.reclaims # calls to ireclaim
@ irix.resource.efs.itobp # calls to efs_itobp
@ irix.resource.efs.itobpf # calls to efs_itobp that found cached bp
@ irix.resource.efs.iupdat # calls to efs_iupdat
@ irix.resource.efs.iupacc # calls to efs_iupdat for IACC
@ irix.resource.efs.iupupd # calls to efs_iupdat for IUPD
@ irix.resource.efs.iupchg # calls to efs_iupdat for ICHG
@ irix.resource.efs.iupmod # calls to efs_iupdat for IMOD
@ irix.resource.efs.iupunk # calls to efs_iupdat for an unrecognized flag
@ irix.resource.efs.iallocrd EFS breads for ialloc
Number of times bread() called during the search for a new EFS inode.
@ irix.resource.efs.iallocrdf EFS breads for ialloc in buf cache
Number of times bread() called during the search for a new EFS inode,
and the requested block was found in the buffer cache, rather than
causing a physical read.
@ irix.resource.efs.ialloccoll # times file create collided
@ irix.resource.efs.bmaprd bmap reads
@ irix.resource.efs.bmapfbm bmap reads found in bm cache
@ irix.resource.efs.bmapfbc bmap reads found in buf cache
@ irix.resource.efs.dirupd EFS directory updates
Number of times an EFS directory is physically re-written. Directories
are re-written as a result of initializing a new directory, creating a
new file, renaming a file, or unlinking a file.
@ irix.resource.efs.truncs # truncates that do something
@ irix.resource.efs.icreat EFS inode creations
Number of times an inode is created in an EFS filesystem.
Also the number of times efs_icreate() is called.
@ irix.resource.efs.attrchg inode updated cause attrs chged
@ irix.resource.efs.readcancel reads canceled in efs_strategy
@ hinv.nfilesys Number of mounted EFS and XFS filesystems
@ irix.filesys.capacity Total capacity of mounted filesystem (Kbytes)
@ irix.filesys.used Total space used on mounted filesystem (Kbytes)
@ irix.filesys.free Total space free on mounted filesystem (Kbytes)
@ irix.filesys.maxfiles Inode capacity of mounted filesystem
@ irix.filesys.usedfiles Number of inodes allocated on mounted filesystem
@ irix.filesys.freefiles Number of unallocated inodes on mounted filesystem
@ irix.filesys.mountdir file system mount point
@ irix.filesys.full percentage of filesystem in use
@ irix.ipc.shm.segsz size of shared memory segment
@ irix.ipc.shm.nattch reference count for shared memory segment
@ irix.ipc.sem.nsems number of semaphores in semaphore set
@ irix.ipc.sem.ncnt number of waiters for semaphore to increase in value
@ irix.ipc.sem.zcnt number of waiters for semaphore to become zero
@ irix.ipc.msg.cbytes number of bytes in message queue
@ irix.ipc.msg.qnum number of messages in message queue
@ irix.ipc.msg.qbytes maximum number of bytes allowed in message queue
@ irix.xfs.allocx XFS extents allocated
Number of file system extents allocated over all XFS filesystems
@ irix.xfs.allocb XFS blocks allocated
Number of file system blocks allocated over all XFS filesystems
@ irix.xfs.freex XFS extents freed
Number of file system extents freed over all XFS filesystems
@ irix.xfs.freeb XFS blocks freed
Number of file system blocks freed over all XFS filesystems
@ irix.xfs.abt_lookup lookups in XFS alloc btrees
Number of lookup operations in XFS filesystem allocation btrees
@ irix.xfs.abt_compare compares in XFS alloc btrees
Number of compares in XFS filesystem allocation btree lookups
@ irix.xfs.abt_insrec insertions in XFS alloc btrees
Number of extent records inserted into XFS filesystem allocation btrees
@ irix.xfs.abt_delrec deletions in XFS alloc btrees
Number of extent records deleted from XFS filesystem allocation btrees
@ irix.xfs.blk_mapr block map read ops in XFS
Number of block map for read operations performed on XFS files
@ irix.xfs.blk_mapw block map write ops in XFS
Number of block map for write operations performed on XFS files
@ irix.xfs.blk_unmap block unmap ops in XFS
Number of block unmap (delete) operations performed on XFS files
@ irix.xfs.add_exlist extent list add ops in XFS
Number of extent list insertion operations for XFS files
@ irix.xfs.del_exlist extent list delete ops in XFS
Number of extent list deletion operations for XFS files
@ irix.xfs.look_exlist extent list lookup ops in XFS
Number of extent list lookup operations for XFS files
@ irix.xfs.cmp_exlist extent list compare ops in XFS
Number of extent list comparisons in XFS extent list lookups
@ irix.xfs.bmbt_lookup block map btree lookup ops in XFS
Number of block map btree lookup operations on XFS files
@ irix.xfs.bmbt_compare block map btree compare ops in XFS
Number of block map btree compare operations in XFS block map lookups
@ irix.xfs.bmbt_insrec block map btree insert ops in XFS
Number of block map btree records inserted for XFS files
@ irix.xfs.bmbt_delrec block map btree delete ops in XFS
Number of block map btree records deleted for XFS files
@ irix.xfs.dir_lookup number of file name directory lookups
This is a count of the number of file name directory lookups
in XFS filesystems. It counts only those lookups which miss
in the operating system's directory name lookup cache and must
search the real directory structure for the name in question.
The count is incremented once for each level of a pathname
search that results in a directory lookup.
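Since the counter is bumped once per pathname level that misses the name cache, the worst case for one search is one lookup per path component. A simplified illustration of that per-level counting (this is a model of the description above, not kernel code):

```python
def max_dir_lookups(pathname):
    """Worst-case number of directory lookups for one pathname search:
    one per path component, assuming every component misses the
    operating system's directory name lookup cache."""
    return len([c for c in pathname.strip("/").split("/") if c])

print(max_dir_lookups("/usr/share/doc/README"))  # -> 4
```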
@ irix.xfs.dir_create number of directory entry creation operations
This is the number of times a new directory entry was created
in XFS filesystems. Each time that a new file, directory,
link, symbolic link, or special file is created in the directory
hierarchy the count is incremented.
@ irix.xfs.dir_remove number of directory entry remove operations
This is the number of times an existing directory entry was
removed in XFS filesystems. Each time that a file, directory,
link, symbolic link, or special file is removed from the
directory hierarchy the count is incremented.
@ irix.xfs.dir_getdents number of times the directory getdents operation is performed
This is the number of times the XFS directory getdents operation
was performed. The getdents operation is used by programs to read
the contents of directories in a file system independent fashion.
This count corresponds exactly to the number of times the getdents(2)
system call was successfully used on an XFS directory.
@ irix.xfs.trans_sync number of synchronous meta-data transactions performed
This is the number of meta-data transactions which waited to be
committed to the on-disk log before allowing the process performing
the transaction to continue. These transactions are slower and
more expensive than asynchronous transactions, because they force
the in memory log buffers to disk more often and they wait for
the log buffer writes to complete. Synchronous
transactions include file truncations and all directory updates
when the file system is mounted with the 'wsync' option.
@ irix.xfs.trans_async number of asynchronous meta-data transactions performed
This is the number of meta-data transactions which did not wait to be
committed to the on-disk log before allowing the process performing
the transaction to continue. These transactions are faster and more
efficient than synchronous transactions, because they commit their
data to the in memory log buffers without forcing those buffers to
be written to disk. This allows multiple asynchronous transactions
to be committed to disk in a single log buffer write. Most transactions
used in XFS file systems are asynchronous.
@ irix.xfs.trans_empty number of meta-data transactions which committed without changing anything
This is the number of meta-data transactions which did not actually
change anything. These are transactions which were started for some
purpose, but in the end it turned out that no change was necessary.
@ irix.xfs.ig_attempts number of in memory inode lookup operations
This is the number of times the operating system looked for an
XFS inode in the inode cache. Whether the inode was found in
the cache or needed to be read in from the disk is not indicated
here, but this can be computed from the ig_found and ig_missed
counts.
@ irix.xfs.ig_found number of successful in memory inode lookup operations
This is the number of times the operating system looked for an
XFS inode in the inode cache and found it. The closer this
count is to the ig_attempts count the better the inode cache
is performing.
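The "closeness" described above is just the inode cache hit ratio, which can be computed from the two counters (sample values are hypothetical):

```python
def inode_cache_hit_ratio(ig_found, ig_attempts):
    """XFS inode cache hit ratio; 1.0 means every lookup was
    satisfied from the in-memory inode cache."""
    return ig_found / ig_attempts if ig_attempts else 0.0

print(inode_cache_hit_ratio(950, 1000))  # -> 0.95
```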
@ irix.xfs.ig_frecycle number of just missed in memory inode lookup operations
This is the number of times the operating system looked for an
XFS inode in the inode cache and saw that it was there but was
unable to use the in memory inode because it was being recycled
by another process.
@ irix.xfs.ig_missed number of failed in memory inode lookup operations
This is the number of times the operating system looked for an
XFS inode in the inode cache and the inode was not there. The
further this count is from the ig_attempts count the better.
@ irix.xfs.ig_dup number of inode cache insertions that fail because the inode is there
This is the number of times the operating system looked for an
XFS inode in the inode cache and found that it was not there but
upon attempting to add the inode to the cache found that another
process had already inserted it.
@ irix.xfs.ig_reclaims number of in memory inode recycle operations
This is the number of times the operating system recycled an
XFS inode from the inode cache in order to use the memory for
that inode for another purpose. Inodes are recycled in order
to keep the inode cache from growing without bound. If the
reclaim rate is high it may be beneficial to raise the
vnode_free_ratio kernel tunable variable to increase the
size of inode cache.
@ irix.xfs.ig_attrchg number of inode attribute change operations
This is the number of times the operating system explicitly changed
the attributes of an XFS inode. For example, this could be to change
the inode's owner, the inode's size, or the inode's timestamps.
@ irix.xfs.log_writes number of buffer writes going to the disk from the log
This variable counts the number of log buffer writes going to the
physical log partitions of all XFS filesystems. Log data traffic
is proportional to the level of meta-data updating. Log buffer
writes are generated when the log buffers fill up or external syncs occur.
@ irix.xfs.log_blocks write throughput to the physical XFS log
This variable counts the number of Kbytes of information being written
to the physical log partitions of all XFS filesystems. Log data
traffic is proportional to the level of meta-data updating. The rate
with which log data gets written depends on the size of internal log
buffers and disk write speed. Therefore, filesystems with very high
meta-data updating may need to stripe the log partition or put the log
partition on a separate drive.
@ irix.xfs.log_noiclogs count of failures for immediate get of buffered/internal
This variable counts the times when a logged transaction cannot
get any log buffer space. When this occurs, all of the internal log
buffers are busy flushing their data to the physical on-disk log.
@ irix.xfs.xfsd_bufs number of buffers processed by the XFS daemons (xfsd)
This is the number of dirty disk buffers flushed out by the XFS
flushing daemons (xfsd). All delayed write, delayed allocation
XFS buffers are written out by the XFS daemons rather than directly
by the generic kernel flushing daemon (bdflushd).
@ irix.xfs.xstrat_bytes number of bytes of data processed by the XFS daemons (xfsd)
This is the number of bytes of file data flushed out by the XFS
flushing daemons (xfsd). It can be used in conjunction with the
xfsd_bufs count to ascertain the average size of the buffers being
processed by the XFS daemons.
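A sketch of the average-size calculation described above (the helper name is hypothetical; arguments are interval deltas of the two counters):

```python
def average_flush_size(xstrat_bytes, xfsd_bufs):
    """Average size in bytes of buffers flushed by the XFS daemons.

    Arguments are deltas of irix.xfs.xstrat_bytes and
    irix.xfs.xfsd_bufs over the same sampling interval.
    """
    if xfsd_bufs == 0:
        return None  # no buffers flushed during the interval
    return xstrat_bytes / xfsd_bufs
```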
@ irix.xfs.xstrat_quick number of buffers processed by the XFS daemons written to contiguous space on disk
This is the number of buffers flushed out by the XFS flushing daemons
which are written to contiguous space on disk. The buffers handled by
the XFS daemons are delayed allocation buffers, so this count gives an
indication of the success of the XFS daemons in allocating contiguous
disk space for the data being flushed to disk.
@ irix.xfs.xstrat_split number of buffers processed by the XFS daemons written to non-contiguous space on disk
This is the number of buffers flushed out by the XFS flushing daemons
which are written to non-contiguous space on disk. The buffers handled
by the XFS daemons are delayed allocation buffers, so this count gives an
indication of the failure of the XFS daemons in allocating contiguous
disk space for the data being flushed to disk. Large values in this
counter indicate that the file system has become fragmented.
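A sketch (hypothetical helper name) of the contiguous-allocation success ratio implied by xstrat_quick and xstrat_split:

```python
def contiguous_flush_ratio(xstrat_quick, xstrat_split):
    """Fraction of daemon-flushed buffers written to contiguous disk space.

    Arguments are interval deltas of irix.xfs.xstrat_quick and
    irix.xfs.xstrat_split.  A ratio well below 1.0 suggests the
    file system has become fragmented.
    """
    total = xstrat_quick + xstrat_split
    if total == 0:
        return None
    return xstrat_quick / total
```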
@ irix.xfs.write_calls number of XFS file system write operations
This is the number of write(2) system calls made to files in
XFS file systems.
@ irix.xfs.write_bytes number of bytes written in XFS file system write operations
This is the number of bytes written via write(2) system calls to
files in XFS file systems. It can be used in conjunction with the
write_calls count to calculate the average size of the write operations
to files in XFS file systems.
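The average-size calculation described above can be sketched as follows (helper name hypothetical; the same pattern applies to read_bytes and read_calls):

```python
def average_write_size(write_bytes, write_calls):
    """Average size in bytes of write(2) calls to XFS files.

    Arguments are interval deltas of irix.xfs.write_bytes and
    irix.xfs.write_calls.
    """
    if write_calls == 0:
        return None  # no writes during the interval
    return write_bytes / write_calls
```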
@ irix.xfs.write_bufs number of buffers used in XFS file system write operations
This is the number of operating system disk buffers used to handle
XFS file system write operations.
@ irix.xfs.read_calls number of XFS file system read operations
This is the number of read(2) system calls made to files in
XFS file systems.
@ irix.xfs.read_bytes number of bytes read in XFS file system read operations
This is the number of bytes read via read(2) system calls to
files in XFS file systems. It can be used in conjunction with the
read_calls count to calculate the average size of the read operations
to files in XFS file systems.
@ irix.xfs.read_bufs number of buffers used in XFS file system read operations
This is the number of operating system disk buffers used to handle
XFS file system read operations.
@ irix.xfs.attr_get number of "get" operations on XFS extended file attributes
The number of "get" operations performed on extended file attributes
within XFS filesystems. The "get" operation retrieves the value of an
extended attribute.
@ irix.xfs.attr_set number of "set" operations on XFS extended file attributes
The number of "set" operations performed on extended file attributes
within XFS filesystems. The "set" operation creates and sets the value
of an extended attribute.
@ irix.xfs.attr_remove number of "remove" operations on XFS extended file attributes
The number of "remove" operations performed on extended file attributes
within XFS filesystems. The "remove" operation deletes an extended
attribute.
@ irix.xfs.attr_list number of "list" operations on XFS extended file attributes
The number of "list" operations performed on extended file attributes
within XFS filesystems. The "list" operation retrieves the set of
extended attributes associated with a file.
@ hinv.ncpu Number of CPUs
The number of processors physically configured in the system.
@ hinv.cpuclock CPU clock speed
The MHz rating of each CPU clock.
On some systems there is one instance of this metric that applies to
all CPUs; on others there is one instance of this metric for each CPU.
@ hinv.mincpuclock Slowest CPU clock speed
The MHz rating of the slowest CPU clock in the system.
@ hinv.maxcpuclock Fastest CPU clock speed
The MHz rating of the fastest CPU clock in the system.
@ hinv.dcache D-cache size
Size of the primary data cache in Kbytes.
@ hinv.icache I-cache size
Size of the primary instruction cache in Kbytes.
@ hinv.secondarycache Secondary cache size
Size of the secondary cache in Kbytes for each CPU.
On some systems there is one instance of this metric that applies to
all CPUs; on others there is one instance of this metric for each CPU.
@ hinv.cputype CPU type
The abbreviated processor type, e.g. "R4400" or "R10000".
@ hinv.physmem Physical memory size
Mbytes of memory physically installed in the system.
@ hinv.pmeminterleave Physical memory interleave
Interleave factor for the physical memory subsystem.
@ hinv.ndisk Number of disks
The number of disks physically configured in the system.
@ hinv.disk_sn Disk serial numbers
For each SCSI disk, report the 8-character SCSI serial number. Missing
or inaccessible serial numbers appear as "unknown".
There is one instance of hinv.disk_sn for each disk device instance
appearing in the irix.disk.dev group of metrics.
The appearance of the same serial number more than once indicates a
dual-ported or multi-hosted device, where multiple "disk" names are in
fact aliases for a single physical disk device.
@ hinv.nnode Number of Origin series nodes
The number of Origin series nodes physically configured in the system.
@ hinv.map.cpu Paths to CPUs in hardware graph
The path to a CPU in the hardware graph filesystem.
There is one string-valued instance of this metric for each processor
physically configured in the system.
@ hinv.machine CPU board type name
The machine hardware name as returned by "uname -m", e.g. IP27.
@ hinv.ncell Number of Origin cells
The number of running cells on this Origin system.
@ hinv.pagesize Memory page size
The memory page size of the running kernel in bytes.
@ hw.r10kevctr.state R10000 event counter state
The values are
-1 this system does not include R10000 CPUs, so no event counters
0 this system has R10000 CPUs, but all of the global event counters
are disabled ... see ecadmin(1) to enable global event counters
other this system has R10000 CPUs, and this metric reports the number
of the global event counters that have been enabled
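The value table above can be decoded as in this sketch (the function name is hypothetical, for illustration only):

```python
def describe_r10kevctr_state(state):
    """Interpret a hw.r10kevctr.state value per the table above."""
    if state == -1:
        return "no R10000 CPUs; event counters unavailable"
    if state == 0:
        return "R10000 CPUs present; global event counters disabled"
    # any other value is the number of enabled global event counters
    return "%d global event counters enabled" % state
```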
@ hw.r10kevctr.cpurev R10000 CPU revision
R10000 CPU revision.
Interpretation of the R10000 event counters is dependent in some
cases on the CPU revision.
@ hw.r10kevctr.cycles R10000 event counter - cycles
R10000 event counter - cycles.
This event counter is incremented once per clock cycle, and
hw.r10kevctr.cycles is the sum over all CPUs.
@ hw.r10kevctr.issue.instri R10000 event counter - instructions issued
R10000 event counter - instructions issued.
This event counter is incremented on each cycle by the sum of the three
following events:
- Integer operations marked as "done" in the active list
- Floating point operations issued to an FPU
- Load/store instructions issued to the address calculation unit on
the previous cycle
hw.r10kevctr.issue.instri is the sum over all CPUs.
@ hw.r10kevctr.issue.loadi R10000 event counter - loads, etc. issued
R10000 event counter - loads, etc. issued.
This counter is incremented when a load instruction was issued to the
address-calculation unit on the previous cycle. Unlike the combined
"issued instructions" count, this counter counts each load instruction
as being issued only once. Prefetches are counted as issued loads in
rev 3.x, but not in rev 2.x.
hw.r10kevctr.issue.loadi is the sum over all CPUs.
@ hw.r10kevctr.issue.storei R10000 event counter - stores issued
R10000 event counter - stores issued.
The counter is incremented on the cycle after a store instruction is
issued to the address-calculation unit, and hw.r10kevctr.issue.storei
is the sum over all CPUs.
@ hw.r10kevctr.issue.scondi R10000 event counter - store conditionals issued
R10000 event counter - store conditionals issued.
This counter is incremented on the cycle after a store conditional
instruction is issued to the address-calculation unit, and
hw.r10kevctr.issue.scondi is the sum over all CPUs.
@ hw.r10kevctr.fail.scondf R10000 event counter - store conditionals failed
R10000 event counter - store conditionals failed.
This counter is incremented when a store-conditional instruction fails.
A failed store-conditional instruction will, in the normal course of
events, graduate; so this event represents a subset of the
store-conditional instructions counted as hw.r10kevctr.grad.scondg.
hw.r10kevctr.fail.scondf is the sum over all CPUs.
@ hw.r10kevctr.issue.brd R10000 event counter - branches decoded
R10000 event counter - branches decoded.
In rev 2.6 and earlier revisions, this counter is incremented when a
branch (conditional or unconditional) instruction is decoded (including
those later aborted or resolved) and inserted into the active list,
even though it may still be killed due to an exception or a prior
mispredicted branch.
For rev 3.x, this counter is incremented when a conditional branch is
determined to have been "resolved". Note that when multiple
floating-point conditional branches are resolved in a single cycle,
this counter is still only incremented by one. Although this is a rare
event, the count will be slightly low in that case.
hw.r10kevctr.issue.brd is the sum over all CPUs.
@ hw.r10kevctr.scache.wb R10000 event counter - quadwords written back from secondary cache
R10000 event counter - quadwords written back from secondary cache.
This counter is incremented once each cycle that a quadword of data is
written back from the secondary cache to the outgoing buffer located in
the on-chip system-interface unit, and hw.r10kevctr.scache.wb is the
sum over all CPUs.
@ hw.r10kevctr.scache.ecc R10000 event counter - single-bit ECC errors on secondary cache data
R10000 event counter - single-bit ECC errors on secondary cache data.
This counter is incremented on the cycle after the correction of a
single-bit error on a quadword read from the secondary cache data
array, and hw.r10kevctr.scache.ecc is the sum over all CPUs.
@ hw.r10kevctr.pcache.imiss R10000 event counter - primary cache instruction misses
R10000 event counter - primary cache instruction misses.
This counter is incremented one cycle after an instruction refill
request is sent to the Secondary Cache Transaction Processing logic.
hw.r10kevctr.pcache.imiss is the sum over all CPUs.
@ hw.r10kevctr.scache.imiss R10000 event counter - secondary cache instruction misses
R10000 event counter - secondary cache instruction misses.
This counter is incremented the cycle after the last quadword of a
primary instruction cache line is written from the main memory, while
the secondary cache refill continues.
hw.r10kevctr.scache.imiss is the sum over all CPUs.
@ hw.r10kevctr.scache.iwaymp R10000 event counter - secondary cache instruction way misprediction
R10000 event counter - secondary cache instruction way misprediction.
This counter is incremented when the secondary cache controller begins
to retry an access to the secondary cache after it hit in the
non-predicted way, provided the secondary cache access was initiated by
the primary instruction cache.
hw.r10kevctr.scache.iwaymp is the sum over all CPUs.
@ hw.r10kevctr.extint R10000 event counter - external intervention requests
R10000 event counter - external intervention requests.
This counter is incremented on the cycle after an external intervention
request enters the Secondary Cache Transaction Processing logic.
hw.r10kevctr.extint is the sum over all CPUs.
@ hw.r10kevctr.extinv R10000 event counter - external invalidate requests
R10000 event counter - external invalidate requests.
This counter is incremented on the cycle after an external invalidate
request enters the Secondary Cache Transaction Processing logic.
hw.r10kevctr.extinv is the sum over all CPUs.
@ hw.r10kevctr.vcc R10000 event counter - virtual coherency condition
R10000 event counter - virtual coherency condition.
This counter is incremented on the cycle after a virtual address
coherence condition is detected, provided that the access was not
flagged as a miss. This condition can only be realized for virtual
page sizes of 4 Kbyte.
hw.r10kevctr.vcc is the sum over all CPUs, but is not available for
R10000 CPUs at rev 3.x or later, where this event is replaced by
hw.r10kevctr.fucomp.
@ hw.r10kevctr.fucomp R10000 event counter - ALU/FPU completion cycles
R10000 event counter that accumulates the number of ALU/FPU completion
cycles.
This counter is incremented on the cycle after either ALU1, ALU2, FPU1,
or FPU2 marks an instruction as "done."
hw.r10kevctr.fucomp is the sum over all CPUs, but is only available for
R10000 CPUs at rev 3.x or later, where this event replaces
hw.r10kevctr.vcc that was available on the earlier revisions of the
R10000 CPUs.
@ hw.r10kevctr.grad.instrg R10000 event counter - instructions graduated
R10000 event counter - instructions graduated.
This counter is incremented by the number of instructions that were
graduated on the previous cycle. When an integer multiply or divide
instruction graduates, it is counted as two graduated instructions.
hw.r10kevctr.grad.instrg is the sum over all CPUs.
@ hw.r10kevctr.grad.loadg R10000 event counter - loads graduated
R10000 event counter - loads graduated.
In rev 2.x, if a store graduates on a given cycle, all loads which
graduate on that cycle do not increment this counter. Prefetch
instructions are included in this count.
In rev 3.x this behavior is changed so that all graduated loads (loads,
prefetches, sync and cacheops) are counted as they graduated on the
previous cycle. Up to four of these instructions can graduate in one
cycle.
hw.r10kevctr.grad.loadg is the sum over all CPUs.
@ hw.r10kevctr.grad.storeg R10000 event counter - stores graduated
R10000 event counter - stores graduated.
Each graduating store (including store-conditionals) increments the
counter. At most one store can graduate per cycle.
hw.r10kevctr.grad.storeg is the sum over all CPUs.
@ hw.r10kevctr.grad.scondg R10000 event counter - store conditionals graduated
R10000 event counter - store conditionals graduated.
At most, one store-conditional can graduate per cycle. This counter is
incremented on the cycle following the graduation of a
store-conditional instruction. Both failed and successful
store-conditional instructions are included in this count; so
successful store-conditionals can be determined as the difference
between this metric and hw.r10kevctr.fail.scondf.
hw.r10kevctr.grad.scondg is the sum over all CPUs.
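The difference described above can be sketched as follows (helper name hypothetical):

```python
def successful_store_conditionals(scondg, scondf):
    """Successful store-conditionals: graduated (grad.scondg) minus
    failed (fail.scondf), both sampled over the same interval."""
    return scondg - scondf
```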
@ hw.r10kevctr.grad.fp R10000 event counter - floating point instructions graduated
R10000 event counter - floating point instructions graduated.
This counter is incremented by the number of FP instructions which
graduated on the previous cycle. Any instruction that sets the FP
Status register bits (EVZOUI) is counted as a graduated floating point
instruction. There can be 0 to 4 such instructions each cycle.
Note that conditional-branches based on FP condition codes and
Floating-point load and store instructions are not included in this
count.
hw.r10kevctr.grad.fp is the sum over all CPUs.
@ hw.r10kevctr.pcache.wb R10000 event counter - quadwords written back from primary cache
R10000 event counter - quadwords written back from primary cache.
This counter is incremented once each cycle that a quadword of data is
valid and being written from primary data cache to secondary cache, and
hw.r10kevctr.pcache.wb is the sum over all CPUs.
@ hw.r10kevctr.tlb R10000 event counter - TLB refill exceptions
R10000 event counter - TLB refill exceptions.
This counter is incremented on the cycle after the TLB miss handler is
invoked. All TLB misses are counted, whether they occur in the native
code or within the TLB handler.
hw.r10kevctr.tlb is the sum over all CPUs.
@ hw.r10kevctr.fail.brmp R10000 event counter - branches mispredicted
R10000 event counter - branches mispredicted.
This counter is incremented on the cycle after a branch is restored
because of misprediction. The misprediction is determined on the same
cycle that the conditional branch is resolved.
For rev 3.x, the misprediction rate is the ratio of the mispredicted
branch count to the resolved conditional branch count.
For rev 2.x, the misprediction rate cannot be precisely determined,
because the decoded branches count includes unconditional branches as
well as conditional branches which are never resolved (due to prior
mispredictions or later interrupts).
hw.r10kevctr.fail.brmp is the sum over all CPUs.
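For rev 3.x CPUs only (where issue.brd counts resolved conditional branches), the misprediction rate described above can be sketched as follows; the helper name is hypothetical:

```python
def branch_misprediction_rate(brmp, brd):
    """Misprediction rate for rev 3.x R10000 CPUs.

    brmp and brd are interval deltas of hw.r10kevctr.fail.brmp and
    hw.r10kevctr.issue.brd.  Not meaningful for rev 2.x, where brd
    includes unconditional and never-resolved branches.
    """
    if brd == 0:
        return None
    return brmp / brd
```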
@ hw.r10kevctr.pcache.dmiss R10000 event counter - primary cache data misses
R10000 event counter - primary cache data misses.
This counter is incremented one cycle after a request to refill a line
of the primary data cache is entered into the Secondary Cache
Transaction Processing logic.
hw.r10kevctr.pcache.dmiss is the sum over all CPUs.
@ hw.r10kevctr.scache.dmiss R10000 event counter - secondary cache data misses
R10000 event counter - secondary cache data misses.
This counter is incremented the cycle after the second quadword of a
data cache line is written from the main memory, while the secondary
cache refill continues.
hw.r10kevctr.scache.dmiss is the sum over all CPUs.
@ hw.r10kevctr.scache.dwaymp R10000 event counter - secondary cache data way misprediction
R10000 event counter - secondary cache data way misprediction.
This counter is incremented when the secondary cache controller begins
to retry an access to the secondary cache because it hit in the
non-predicted way, provided the secondary cache access was initiated by
the primary data cache.
hw.r10kevctr.scache.dwaymp is the sum over all CPUs.
@ hw.r10kevctr.scache.extinthit R10000 event counter - external intervention hits in secondary cache
R10000 event counter - external intervention hits in secondary cache.
This counter is incremented on the cycle after an external intervention
request is determined to have hit in the secondary cache, and
hw.r10kevctr.scache.extinthit is the sum over all CPUs.
@ hw.r10kevctr.scache.extinvhit R10000 event counter - external invalidate hits in secondary cache
R10000 event counter - external invalidate hits in secondary cache.
This counter is incremented on the cycle after an external invalidate
request is determined to have hit in the secondary cache, and
hw.r10kevctr.scache.extinvhit is the sum over all CPUs.
@ hw.r10kevctr.scache.upclean R10000 event counter - upgrade requests on clean secondary cache lines
R10000 event counter - upgrade requests on clean secondary cache lines.
This counter is incremented on the cycle after a request to change the
Clean Exclusive state of the targeted secondary cache line to Dirty
Exclusive is sent to the Secondary Cache Transaction Processing logic.
hw.r10kevctr.scache.upclean is the sum over all CPUs.
@ hw.r10kevctr.scache.upshare R10000 event counter - upgrade requests on shared secondary cache lines
R10000 event counter - upgrade requests on shared
secondary cache lines.
This counter is incremented on the cycle after a request to change the
Shared state of the targeted secondary cache line to Dirty Exclusive is
sent to the Secondary Cache Transaction Processing logic.
hw.r10kevctr.scache.upshare is the sum over all CPUs.
@ irix.engr.one - placeholder for IRIX diagnostic performance metrics
In the process of IRIX development and/or problem resolution the
"irix.engr" group of performance metrics may be used to provide access
to additional, diagnostic, or prototype kernel instrumentation.
These performance metrics are unlikely to provide any values except on
systems running pre-release or specially patched IRIX kernels.
@ irix.engr.two - placeholder for IRIX diagnostic performance metrics
In the process of IRIX development and/or problem resolution the
"irix.engr" group of performance metrics may be used to provide access
to additional, diagnostic, or prototype kernel instrumentation.
These performance metrics are unlikely to provide any values except on
systems running pre-release or specially patched IRIX kernels.
@ irix.engr.three - placeholder for IRIX diagnostic performance metrics
In the process of IRIX development and/or problem resolution the
"irix.engr" group of performance metrics may be used to provide access
to additional, diagnostic, or prototype kernel instrumentation.
These performance metrics are unlikely to provide any values except on
systems running pre-release or specially patched IRIX kernels.
@ irix.engr.four - placeholder for IRIX diagnostic performance metrics
In the process of IRIX development and/or problem resolution the
"irix.engr" group of performance metrics may be used to provide access
to additional, diagnostic, or prototype kernel instrumentation.
These performance metrics are unlikely to provide any values except on
systems running pre-release or specially patched IRIX kernels.
@ irix.kaio.reads Number of kaio read requests
Cumulative number of Kernel AIO read requests since system boot time.
@ irix.kaio.writes Number of kaio write requests
Cumulative number of Kernel AIO write requests since system boot time.
@ irix.kaio.read_bytes Number of kaio bytes read
Cumulative number of bytes read via Kernel AIO requests since system
boot time.
@ irix.kaio.write_bytes Number of kaio bytes written
Cumulative number of bytes written via Kernel AIO requests since system
boot time.
@ irix.kaio.free Number of free kaio control buffers
Current number of Kernel AIO control buffers on the global free list.
irix.kaio.inuse + irix.kaio.free should be equal to the total number of
allocated Kernel AIO control buffers, i.e. max_sys_aio defined in
/var/sysgen/master.d/kaio.
@ irix.kaio.inuse Number of outstanding kaio requests
Current number of Kernel AIO control buffers not on the global free
list.
irix.kaio.inuse + irix.kaio.free should be equal to the total number of
allocated Kernel AIO control buffers, i.e. max_sys_aio defined in
/var/sysgen/master.d/kaio.
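The invariant stated above can be sketched as a sanity check (the function name is hypothetical):

```python
def kaio_buffers_consistent(inuse, free, max_sys_aio):
    """Check that irix.kaio.inuse plus irix.kaio.free equals the
    total number of allocated Kernel AIO control buffers
    (max_sys_aio)."""
    return inuse + free == max_sys_aio
```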
@ irix.kaio.proc_maxinuse Largest kaio per-process free list
The largest per-process free list size for Kernel AIO service requests
observed since system boot time.
@ irix.kaio.nobuf Number of times a caller blocked waiting for a free kaio request buffer
Callers of the Kernel AIO service require a request buffer (kaio
header). There are a finite number of these buffers (max_sys_aio
defined in /var/sysgen/master.d/kaio), and the buffers are kept on
global and per-process free lists.
This metric measures the number of times (since system boot time) that
a request for such a buffer was turned down because no buffers were
available on that process's per-process free list or on the global free
list.
@ irix.kaio.errors Number of kaio errors
Cumulative number of Kernel AIO errors since system boot time.
@ irix.kaio.inprogress Number of kaio operations in progress
Number of Kernel AIO operations in the state between "Kernel AIO
read()/write() system call made" and "interrupt received from disk
driver and processed by Kernel AIO".
@ hinv.nrouter number of CrayLink routers in the system
The number of CrayLink routers in the system that have one or more
connected ports.
@ hinv.nrouterport number of connected CrayLink router ports in the system
The total number of connected ports on all CrayLink routers in the
system.
@ hinv.map.router path to a CrayLink router in the hardware graph
The path to a CrayLink router in the hardware graph filesystem.
There is one string-valued instance of this metric for each CrayLink
router configured in the system.
@ hw.router.portmask active port mask for each CrayLink router
Non-zero bits in the value of this metric indicate CrayLink router
ports that are connected in the system. Zero bits indicate the
corresponding router port is not connected.
There is one instance of this metric for each CrayLink router in the
system.
@ hw.router.rev_id router chip version id number for each CrayLink router
The CrayLink router chip version identification number.
There is one instance of this metric for each CrayLink router in the
system.
@ hw.router.send_util average send utilization for each CrayLink router
The send utilization averaged over all connected ports on a CrayLink
router.
A value of 100% indicates all connected ports on a particular CrayLink
router are sending at maximum capacity. This is independent of the
receive utilization (see hw.router.recv.total_util).
In practice, it is rare to observe sustained CrayLink router
utilization exceeding 30%. See the description for
hw.router.perport.send_util for details.
There is one instance of this metric for each CrayLink router in the
system.
@ hw.router.recv.total_util average receive utilization for each CrayLink router
The receive utilization averaged over all connected ports on a CrayLink
router.
A value of 100% indicates all connected ports on a particular CrayLink
router are receiving at maximum capacity. This is independent of the
send utilization (see hw.router.send_util).
In practice, it is rare to observe sustained CrayLink router
utilization exceeding 30%. See the description for
hw.router.perport.recv.total_util for details.
There is one instance of this metric for each CrayLink router in the
system.
@ hw.router.recv.bypass_util average receive bypass utilization for each CrayLink router
The average of hw.router.perport.recv.bypass_util for all connected
ports on each CrayLink router.
See hw.router.perport.recv.bypass_util for details.
There is one instance of this metric for each CrayLink router in the
system.
@ hw.router.recv.queued_util average receive queued utilization for each CrayLink router
The average of hw.router.perport.recv.queued_util for all connected
ports on each CrayLink router.
See hw.router.perport.recv.queued_util for details.
There is one instance of this metric for each CrayLink router in the
system.
@ hw.router.retry_errors total retry errors for each CrayLink router
The total number of retry errors on all connected ports for each
CrayLink router in the system. This counter is normally converted to a
rate (per-second) by client tools.
There is one instance of this metric for each CrayLink router in the
system.
Retry errors are not fatal; however, persistent occurrence of these
errors may indicate a CrayLink interconnect problem. Refer to
hw.router.perport.retry_errors to identify the particular ports that
are contributing to the aggregated count for the router.
@ hw.router.sn_errors total sequence number errors for each CrayLink router
The total number of sequence number errors on all connected ports for
each CrayLink router in the system. This counter is normally converted
to a rate (per-second) by client tools.
On some early versions of the CrayLink router and hub this metric may
include some normal transactions, i.e. some non-sequence errors may be
incorrectly counted as sequence errors.
There is one instance of this metric for each CrayLink router in the
system.
Do not be alarmed by sequence number errors; they are expected in
normal operation and are not fatal.
@ hw.router.cb_errors total checkbit errors for each CrayLink router
The total number of checkbit errors on all connected ports for each
CrayLink router in the system. This counter is normally converted to a
rate (per-second) by client tools.
There is one instance of this metric for each CrayLink router in the
system.
Checkbit errors are not fatal; however, persistent occurrence of these
errors may indicate a CrayLink interconnect problem. Refer to
hw.router.perport.cb_errors to identify the particular ports that
are contributing to the aggregated count for the router.
@ hw.router.perport.send_util send utilization for each CrayLink router port
The utilization of the send bandwidth for each CrayLink router port,
computed from the statistically sampled averages reported by the
hardware.
A value of 100% indicates a port has reached its maximum send
capacity. CrayLink routers support independent send and receive
channels on each port and so this metric is independent of the
corresponding receive utilization (see hw.router.recv.total_util).
In practice, it is rare to observe sustained CrayLink router port
utilization exceeding 30%.
There is one instance of this metric for each connected port on each
CrayLink router in the system.
@ hw.router.perport.recv.total_util receive utilization for each CrayLink router port
The utilization of the receive bandwidth for each CrayLink router port,
computed from the statistically sampled averages reported by the
hardware.
A value of 100% indicates a port has reached its maximum receive
capacity. CrayLink routers support independent send and receive
channels on each port and so this metric is independent of the
corresponding send utilization (see hw.router.send_util).
In practice, it is rare to observe sustained CrayLink router port
utilization exceeding 30%.
Values for this metric equal the sum of
hw.router.perport.recv.bypass_util and
hw.router.perport.recv.queued_util for each CrayLink router port.
There is one instance of this metric for each connected port on each
CrayLink router in the system.
@ hw.router.perport.recv.bypass_util receive bypass utilization for each CrayLink router port
The utilization of the receive bandwidth for each CrayLink router port
for packets that could be processed without first being queued in the
DAMQ (Dynamically Allocated Memory Queue). Packets which bypass the
DAMQ queue incur lower transmission latencies.
Values for this metric added to the corresponding values for the
hw.router.perport.recv.queued_util metric equal the value of the
hw.router.perport.recv.total_util metric.
There is one instance of this metric for each connected port on each
CrayLink router in the system.
@ hw.router.perport.recv.queued_util receive queued utilization for each CrayLink router port
The utilization of the receive bandwidth for each CrayLink router port
for packets that could not be processed without first being queued in
the DAMQ (Dynamically Allocated Memory Queue). Packets which do not
bypass the DAMQ incur higher transmission latencies.
Values for this metric added to the corresponding values for
hw.router.perport.recv.bypass_util equal hw.router.perport.recv.total_util.
There is one instance of this metric for each connected port on each
CrayLink router in the system.
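The bypass/queued decomposition described above can be sketched as follows (hypothetical helper name; arguments are the two per-port utilization percentages):

```python
def total_recv_util(bypass_util, queued_util):
    """Total receive utilization for a CrayLink router port as the sum
    of its bypass and queued utilization percentages."""
    return bypass_util + queued_util
```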
@ hw.router.perport.retry_errors retry errors for each CrayLink router port
The number of retry errors for each connected port on each CrayLink
router in the system. This counter is normally expressed as a rate
per-second by client tools.
There is one instance of this metric for each connected port on each
CrayLink router in the system.
Retry errors are not fatal, however persistent occurrence of these
errors may indicate a CrayLink interconnect problem.
@ hw.router.perport.sn_errors sequence number errors for each CrayLink router port
The number of sequence number errors for each connected port on each
CrayLink router in the system. This counter is normally converted to a
rate (per-second) by client tools.
On some early versions of the CrayLink router and hub this metric may
include some normal transactions, i.e. some non-sequence errors may be
incorrectly counted as sequence errors.
There is one instance of this metric for each connected port on each
CrayLink router in the system.
Do not be alarmed by sequence number errors; they are expected in
normal operation and are not fatal.
@ hw.router.perport.cb_errors checkbit errors for each CrayLink router port
The number of checkbit errors for each connected port on each CrayLink
router in the system. This counter is normally converted to a rate
(per-second) by client tools.
There is one instance of this metric for each connected port on each
CrayLink router in the system.
Checkbit errors are not fatal, however persistent occurrence of these
errors may indicate a CrayLink interconnect problem.
@ hw.router.perport.excess_errors excessive errors flag for each CrayLink router port
The number of error "types", i.e. retry or checkbit, that exceed an
effective rate of 500 per minute over the last polling period.
A non-zero value (i.e. 1 or 2) indicates excessive errors on the port.
There is one instance of this metric for each connected port on each
CrayLink router in the system.
@ hinv.map.routerport path to a CrayLink router port in hardware graph
The path to a CrayLink router port in the hardware graph filesystem.
There is one string-valued instance of this metric for each connected
port on each CrayLink router in the system.
@ hinv.interconnect interconnection endpoint in hardware graph filesystem for each CrayLink router port
There is one string valued instance of this metric for each connected
port on each CrayLink router in the system. The metric's value is the
path to the destination node or destination CrayLink router port in the
hardware graph filesystem.
The values for this metric, in conjunction with the external instance
identifiers (i.e. CrayLink router ports), are sufficient to determine
the connection topology of an Origin series system.
Considering nodes and CrayLink routers to be vertices in the system
topology and CrayLink router ports to be arcs, the external instance
identifiers provide the name of the source of each link (i.e. a
specific port on a CrayLink router) and the metric instance values
provide the name of the destination (i.e. another CrayLink router or an
Origin series node).
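The topology reconstruction described above can be sketched as follows.
The instance names (source router ports) and string values (destinations)
are invented for illustration; real names come from the hardware graph.

```python
# A sketch of recovering Origin interconnect topology from hinv.interconnect.
# Keys play the role of external instance identifiers (source router ports);
# values play the role of the metric's string values (destinations).
# All paths below are hypothetical examples.
instances = {
    "/hw/module/1/slot/r1/router/1": "/hw/module/1/slot/n1/node",
    "/hw/module/1/slot/r1/router/2": "/hw/module/1/slot/r2/router/4",
}

# Each (instance, value) pair is a directed arc from a router port to a
# node or to another router port, giving the system topology.
edges = [(src, dst) for src, dst in instances.items()]

for src, dst in edges:
    print(src, "->", dst)
```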
@ hw.router.type Router type
The type of Origin router. Possible values are:
0 - normal router with at most two nodes, or a star router with up to 4 nodes
1 - metarouter which has only router connections for larger configurations
@ irix.numa.routerload instantaneous NUMA load on CrayLink routers
Instantaneous percentage load on CrayLink routers sampled by the
traffic control daemon which controls the local thresholds.
There is one instance of this metric for each Origin series node in
the system.
@ irix.numa.migr.threshold NUMA migration threshold percentage
Last set migration threshold percentage for each Origin series node in
the system.
@ irix.numa.migr.intr.total NUMA migration interrupts
Number of NUMA migration interrupts for each Origin series node in the
system.
@ irix.numa.migr.intr.failstate ignored NUMA interrupts due to inappropriate state
Number of ignored NUMA interrupts due to inappropriate state.
There is one instance of this metric for each Origin series node in the
system.
@ irix.numa.migr.intr.failenabled ignored NUMA interrupts due to disabled auto migration
Number of ignored NUMA interrupts due to disabled auto migration.
There is one instance of this metric for each Origin series node in the
system.
@ irix.numa.migr.intr.failfrozen ignored NUMA interrupts due to frozen page
Number of ignored NUMA interrupts due to frozen page.
There is one instance of this metric for each Origin series node in the
system.
@ irix.numa.migr.auto.total total automatic successful NUMA migrations
Total number of automatic successful NUMA migrations to and from a
node.
[derived as sum of irix.numa.migr.auto.in and irix.numa.migr.auto.out].
There is one instance of this metric for each Origin series node in the
system.
@ irix.numa.migr.auto.in automatic successful NUMA migrations to each node
Number of automatic successful NUMA migrations to a node.
There is one instance of this metric for each Origin series node in the
system.
@ irix.numa.migr.auto.out automatic successful NUMA migrations from each node
Number of automatic successful NUMA migrations from a node.
There is one instance of this metric for each Origin series node in the
system.
@ irix.numa.migr.auto.fail failed automatic NUMA migrations from each node
Number of failed automatic NUMA migrations from a node.
There is one instance of this metric for each Origin series node in the
system.
@ irix.numa.migr.auto.queue_len queued automatic NUMA migration requests for each node
Number of queued automatic NUMA migration requests for a node.
There is one instance of this metric for each Origin series node in the
system.
@ irix.numa.migr.user.total Total number of user initiated NUMA migrations to and from each node
Total number of user initiated NUMA migrations to and from a node.
[derived as sum of irix.numa.migr.user.in and irix.numa.migr.user.out]
There is one instance of this metric for each Origin series node in the
system.
@ irix.numa.migr.user.in user initiated NUMA migrations to each node
Number of user initiated NUMA migrations to a node.
There is one instance of this metric for each Origin series node in the
system.
@ irix.numa.migr.user.out user initiated NUMA migrations from each node
Number of user initiated NUMA migrations from a node.
There is one instance of this metric for each Origin series node in the
system.
@ irix.numa.migr.user.fail failed user initiated NUMA migrations from each node
Number of failed user initiated NUMA migrations from a node.
There is one instance of this metric for each Origin series node in the
system.
@ irix.numa.migr.user.queue_len enqueued user NUMA migrations for each node
Number of enqueued user NUMA migrations for a node.
There is one instance of this metric for each Origin series node in the
system.
@ irix.numa.migr.queue.total Total number of queued successful NUMA migrations to and from each node
Total number of queued successful NUMA migrations to and from a node
[derived from irix.numa.migr.queue.in and irix.numa.migr.queue.out]
There is one instance of this metric for each Origin series node in the
system.
@ irix.numa.migr.queue.in queued successful NUMA migrations into each node
Number of queued successful NUMA migrations into a node.
There is one instance of this metric for each Origin series node in the
system.
@ irix.numa.migr.queue.out number of queued successful NUMA migrations out of each node
Number of queued successful NUMA migrations out of a node.
There is one instance of this metric for each Origin series node in the
system.
@ irix.numa.migr.queue.fail.total queued failed NUMA migrations for each node
Number of queued failed NUMA migrations for a node.
There is one instance of this metric for each Origin series node in the
system.
@ irix.numa.migr.queue.fail.overflow queuing failures due to queue overflow for each node
Number of queuing failures due to queue overflow for a node.
There is one instance of this metric for each Origin series node in the
system.
@ irix.numa.migr.queue.fail.state queuing failures due to invalid state for each node
Number of queuing failures due to invalid state for a node.
There is one instance of this metric for each Origin series node in the
system.
@ irix.numa.migr.queue.fail.failq queued pages not migrated because of invalid state for each node
Number of queued pages not migrated because of invalid state for a node.
There is one instance of this metric for each Origin series node in the
system.
@ irix.numa.migr.coalesc.total total number of coalescing daemon NUMA migrations
Total number of coalescing daemon NUMA migrations to and from a node.
[derived from irix.numa.migr.coalesc.in and irix.numa.migr.coalesc.out]
There is one instance of this metric for each Origin series node in the
system.
@ irix.numa.migr.coalesc.in coalescing daemon NUMA migrations to each node
Number of coalescing daemon NUMA migrations to a node.
There is one instance of this metric for each Origin series node in the
system.
@ irix.numa.migr.coalesc.out coalescing daemon NUMA migrations from each node
Number of coalescing daemon NUMA migrations from a node.
There is one instance of this metric for each Origin series node in the
system.
@ irix.numa.migr.coalesc.fail failed coalescing daemon NUMA migrations for each node
Number of failed coalescing daemon NUMA migrations from a node.
There is one instance of this metric for each Origin series node in the
system.
@ irix.numa.migr.triggers.capacity number of Queue Migration Capacity Triggers for each node
Number of Queue Migration Capacity Triggers.
There is one instance of this metric for each Origin series node in the
system.
@ irix.numa.migr.triggers.time number of Queue Migration Time Triggers for each node
Number of Queue Migration Time Triggers.
There is one instance of this metric for each Origin series node in the
system.
@ irix.numa.migr.triggers.traffic number of Queue Migration Traffic Triggers for each node
Number of Queue Migration Traffic Triggers.
There is one instance of this metric for each Origin series node in the
system.
@ irix.numa.memory.lent memory lent to other nodes for each node
Memory in Kbytes that has been lent to other nodes requiring more memory.
There is one instance of this metric for each Origin series node in the
system.
@ irix.numa.memory.replicated.page_count replicated pages in the system for each node
Number of replicated pages at a particular Origin series node.
There is one instance of this metric for each Origin series node in the
system.
@ irix.numa.memory.replicated.page_dest times each node was the target of replication
Number of times this node was target of replication from some other
node in the system.
There is one instance of this metric for each Origin series node in the
system.
@ irix.numa.memory.replicated.page_reuse times a replicated page on each node was reused
Number of times a replicated page was reused for each node on the system.
There is one instance of this metric for each Origin series node in the
system.
@ irix.numa.memory.replicated.page_notavail failures to allocate a page for creating replication for each node
Count of failures to allocate a page for creating replication for a
node in the system.
There is one instance of this metric for each Origin series node in the
system.
@ irix.numa.memory.replicated.control_refuse refusals to replicate by the replication controller for each node
Count of refusals to replicate by the replication controller for a node
in the system.
There is one instance of this metric for each Origin series node in the
system.
@ irix.numa.unpegger.calls number of times the unpegger has been called for each node
Number of times the unpegger has been called for a node.
There is one instance of this metric for each Origin series node in the
system.
@ irix.numa.unpegger.npages number of pages unpegged so far for each node
Number of pages unpegged so far for a node.
There is one instance of this metric for each Origin series node in the
system.
@ irix.numa.unpegger.last_npages number of pages unpegged during the last unpegging cycle for each node
Number of pages unpegged during the last unpegging cycle for a node.
There is one instance of this metric for each Origin series node in the
system.
@ irix.numa.bouncectl.calls number of bounce control cycles for each node
Number of bounce control cycles for a node.
There is one instance of this metric for each Origin series node in the
system.
@ irix.numa.bouncectl.frozen_pages number of frozen pages so far for each node
Number of frozen pages so far for a node.
There is one instance of this metric for each Origin series node in the
system.
@ irix.numa.bouncectl.melt_pages number of melt pages so far for each node
Number of melt pages so far for a node.
There is one instance of this metric for each Origin series node in the
system.
@ irix.numa.bouncectl.last_melt_pages number of melt pages in last cycle for each node
Number of melt pages in last cycle for a node.
There is one instance of this metric for each Origin series node in the
system.
@ irix.numa.bouncectl.dampened_pages number of dampened pages for each node
Number of dampened pages for a node.
There is one instance of this metric for each Origin series node in the
system.
@ irix.numa.memoryd.activations Memory daemon total activations for each node
Memory daemon total activations for a node.
There is one instance of this metric for each Origin series node in the
system.
@ irix.numa.memoryd.periodic Memory daemon periodic activations for each node
Memory daemon periodic activations for a node.
There is one instance of this metric for each Origin series node in the
system.
@ irix.node.physmem total physical memory per Origin series node
The total physical memory installed on each Origin series node in
bytes.
@ irix.node.free.total total free physical memory per Origin series node
The total unallocated physical memory on each Origin series node in
bytes.
@ irix.node.free.pages_64k number of free 64Kbyte pages per Origin series node
The number of 64Kbyte pages that are free in the "large page size
memory pool" on each Origin series node.
@ irix.node.free.pages_256k number of free 256Kbyte pages per Origin series node
The number of 256Kbyte pages that are free in the "large page size
memory pool" on each Origin series node.
@ irix.node.free.pages_1m number of free 1Mbyte pages per Origin series node
The number of 1Mbyte pages that are free in the "large page size memory
pool" on each Origin series node.
@ irix.node.free.pages_4m number of free 4Mbyte pages per Origin series node
The number of 4Mbyte pages that are free in the "large page size memory
pool" on each Origin series node.
@ irix.node.free.pages_16m number of free 16Mbyte pages per Origin series node
The number of 16Mbyte pages that are free in the "large page size
memory pool" on each Origin series node.
@ hinv.map.node paths to Origin series nodes in the hardware graph
The path to an Origin series node in the hardware graph filesystem.
There is one string-valued instance of this metric for each Origin
series node physically configured in the system.
@ irix.mem.lpage.coalesce.scans large page coalescing attempts
Cumulative number of scans made by the coalescing daemon over the
entire set of pages. The daemon periodically scans all pages in memory
to see if they can be coalesced, and is also active when a requested
page size is unavailable.
@ irix.mem.lpage.coalesce.success successful large page merges
Cumulative number of successful large page merges by the coalescing
daemon.
@ irix.mem.lpage.faults count of large page faults
Cumulative number of large page faults that were successfully
satisfied.
@ irix.mem.lpage.allocs count of vfault large page allocations
As part of a large page fault, vfault() may request allocation of a
large page. This metric denotes the number of such requests that were
satisfied.
@ irix.mem.lpage.downgrade count of large page downgrades
Cumulative number of page allocation failures where the requested
large page cannot be immediately provided, so the even and odd
addresses of the larger page size are downgraded to the base page
size, and this smaller page is allocated.
@ irix.mem.lpage.page_splits count of large page splits
Cumulative number of splits of large pages to satisfy requests for
smaller sized pages.
@ irix.mem.lpage.basesize minimum page size
The base (smallest) page size supported by the kernel, in Kilobytes.
@ irix.mem.lpage.maxsize maximum page size
The maximum page size supported by the kernel, in Kilobytes.
@ irix.mem.lpage.maxenabled maximum enabled page size
The maximum enabled page size, in Kilobytes.
@ irix.mem.lpage.enabled enabled large page sizes
Set of large page sizes which are currently enabled, in Kilobytes.
Range of values is 16, 64, 256, 1024, 4096, and 16384.
@ irix.pmda.version libirixpmda build version
The version of libirixpmda.so being used by pmcd(1).
The format follows the INSTVERSIONNUM version number that is understood
and decoded by uname(1) when the -V option is used.
@ irix.pmda.uname identify the current IRIX system
The value of this metric is equivalent to running the command
"uname -a" for the system on which pmcd is running.
@ irix.pmda.reset libirixpmda reset
Storing any value into this metric will cause all modules in the IRIX
PMDA to execute a "reset" to rediscover the configuration of the
system and the instance domains behind many of the performance metrics
exported by libirixpmda.
This is most useful if some hardware component or IRIX software
subsystem has changed status since the time pmcd(1) was started, and
the user wishes to force libirixpmda to re-evaluate the configuration
without restarting pmcd(1).
@ irix.pmda.debug libirixpmda diagnostic/debug verbosity
Storing values into this metric with pmstore(1) allows the level of
diagnostic output from libirixpmda.so to be controlled.
Note this control is independent of the metric pmcd.debug which
controls debugging in the libpcp.so routines for both pmcd(1) and all
PMDAs attached to pmcd as DSOs (of which libirixpmda is the most common
example).
By default, the diagnostic output will be written to the file
/var/adm/pcplog/pmcd.log.
@ hinv.nxbow Number of xbows
The number of configured xbows.
@ hinv.map.xbow Paths to xbows in hardware graph
The path to an xbow in the hardware graph filesystem.
There is one string-valued instance of this metric for each xbow
physically configured in the system.
@ irix.xbow.active.xbows Number of monitored xbows
The current number of actively monitored xbows.
@ irix.xbow.active.ports Number of monitored xbow ports
The number of actively monitored xbow ports. This metric is dependent
on which xbows are monitored, and is the sum of irix.xbow.nports.
@ irix.xbow.switch Switch xbow monitoring on/off
Storing a non-zero value with pmstore(1) into this metric will turn the
monitoring on for this xbow instance. Storing a value of zero will turn
the monitoring off (default).
@ irix.xbow.nports Number of monitored ports on each xbow
The number of ports that are actively monitored in each xbow.
If the xbow monitoring is switched off (see irix.xbow.switch) this metric
will be zero regardless of the number of ports in active use.
@ irix.xbow.total.src total bytes sent from source
The total number of bytes that have been sent from the source links
of this xbow.
This metric is the sum of irix.xbow.port.src for all active ports on this xbow.
@ irix.xbow.total.dst total bytes received at destination
The total number of bytes that have been received at the destination
links of this xbow.
This metric is the sum of irix.xbow.port.dst for all active ports on this xbow.
@ irix.xbow.total.rrcv total receive retries
The total number of link level protocol retries sent by this xbow when
receiving micropackets.
This metric is the sum of irix.xbow.port.rrcv for all active ports on this
xbow.
@ irix.xbow.total.rxmt total transmit retries
The total number of link level protocol retries sent by this xbow when
transmitting micropackets.
This metric is the sum of irix.xbow.port.rxmt for all active ports on this
xbow.
@ irix.xbow.port.flags xbow port mode
The contents of the link auxiliary status register for this xbow port.
This metric is exported from xb_vcounter_t.flags and is equivalent to
the flags field in xbstat(1).
@ irix.xbow.port.src bytes sent from source
The number of bytes that have been sent from the source link of
this port.
This metric is exported from xb_vcounter_t.vsrc and is equivalent to
the Source field in xbstat(1).
@ irix.xbow.port.dst bytes received at destination
The number of bytes that have been received at the destination
link of this port.
This metric is exported from xb_vcounter_t.vdst and is equivalent to
the Destination field in xbstat(1).
@ irix.xbow.port.rrcv llp receive retries
The number of link level protocol retries sent on this port when receiving
micropackets.
This metric is exported from xb_vcounter_t.crcv and is equivalent to
the RcRtry field in xbstat(1).
@ irix.xbow.port.rxmt llp transmit retries
The number of link level protocol retries sent on this port when transmitting
micropackets.
This metric is exported from xb_vcounter_t.cxmt and is equivalent to
the TxRtry field in xbstat(1).
@ irix.xbow.gen xbow metrics generation number
This metric is a generation number for each xbow which is incremented
when the state of monitoring (on or off) changes due to the use of pmstore(1)
on irix.xbow.switch. Tools can use this metric to detect deactivation
and activation of xbow statistics which may affect the accuracy of xbow
metric values.
@ irix.network.socket.type number of open sockets
The number of open sockets for each type of socket.
This metric is exported from sockstat.open_sock in sys/tcpipstats.h.
@ irix.network.socket.state number of sockets in each state
The number of TCP sockets in each TCP state.
This metric is exported from sockstat.tcp_sock in sys/tcpipstats.h.
@ hw.hub.ni.retry hub network interface retries
The number of retries on the hub network interface.
This metric is exported from hubstat_t.hs_ni_retry_errors in sys/SN0/hubstat.h
(IRIX 6.4) or sys/SN/SN0/hubstat.h in later IRIX releases. This metric is
equivalent to NI: Retries in linkstat(1).
@ hw.hub.ni.sn_errors hub network interface sequence number errors
The number of sequence number errors on the hub network interface.
This metric is exported from hubstat_t.hs_ni_sn_errors in sys/SN0/hubstat.h
(IRIX 6.4) or sys/SN/SN0/hubstat.h in later IRIX releases. This metric is
equivalent to NI: SN errs in linkstat(1).
@ hw.hub.ni.cb_errors hub network interface checkbit errors
The number of checkbit errors on the hub network interface.
This metric is exported from hubstat_t.hs_ni_cb_errors in sys/SN0/hubstat.h
(IRIX 6.4) or sys/SN/SN0/hubstat.h in later IRIX releases. This metric is
equivalent to NI: CB errs in linkstat(1).
@ hw.hub.ni.overflows hub network interface counter overflows
The number of counter overflows on the hub network interface.
This metric is exported from hubstat_t.hs_ni_overflows in sys/SN0/hubstat.h
(IRIX 6.4) or sys/SN/SN0/hubstat.h in later IRIX releases.
@ hw.hub.ii.sn_errors hub I/O interface sequence number errors
The number of sequence number errors on the hub I/O interface.
This metric is exported from hubstat_t.hs_ii_sn_errors in sys/SN0/hubstat.h
(IRIX 6.4) or sys/SN/SN0/hubstat.h in later IRIX releases. This metric is
equivalent to II: SN errs in linkstat(1).
@ hw.hub.ii.cb_errors hub I/O interface checkbit errors
The number of checkbit errors on the hub I/O interface.
This metric is exported from hubstat_t.hs_ii_cb_errors in sys/SN0/hubstat.h
(IRIX 6.4) or sys/SN/SN0/hubstat.h in later IRIX releases. This metric is
equivalent to II: CB errs in linkstat(1).
@ hw.hub.ii.overflows hub I/O interface counter overflows
The number of counter overflows on the hub I/O interface.
This metric is exported from hubstat_t.hs_ii_overflows in sys/SN0/hubstat.h
(IRIX 6.4) or sys/SN/SN0/hubstat.h in later IRIX releases.
@ hw.hub.nasid unique hub identifier
@ hinv.nxlv_volumes Number of configured XLV subvolumes
The number of configured XLV subvolumes on this system.
See xlv_mgr(1) for more information.
@ irix.xlv.read Number of read operations on each XLV subvolume
@ irix.xlv.write Number of write operations on each XLV subvolume
@ irix.xlv.read_bytes Number of Kbytes read from each XLV subvolume
@ irix.xlv.write_bytes Number of Kbytes written to each XLV subvolume
@ irix.xlv.stripe_ops Number of operations to striped volume elements of each XLV subvolume
For XLV subvolumes with striped volume elements, this is the total
number of read and write operations to the component volume elements.
Depending on the volume geometry, and whether or not the subvolume
is plexed, this may not match the number of read and write operations
given by irix.xlv.read + irix.xlv.write for the same subvolume.
@ irix.xlv.stripe_units Number of stripe units involved in operations to striped volume elements
The cumulative number of stripe units transferred in stripe operations
to each subvolume, as reported by irix.xlv.stripe_ops.
@ irix.xlv.aligned.full_width Aligned operations for stripe width transfers
The cumulative count for each XLV subvolume of the subset of the total
stripe operations, as reported by irix.xlv.stripe_ops, where
- the transfer begins on a stripe unit boundary
- the transfer ends on a stripe unit boundary
- the transfer involves _exactly_ one stripe width
The stripe width equals the stripe unit size times the number of disks in
the stripe.
These transfers are the most efficient in terms of alignment and the
potential for concurrency in the disk subsystem.
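The stripe-width arithmetic and the classification above can be sketched
as follows. The geometry and transfer values are hypothetical; XLV reports
the real geometry per subvolume.

```python
# Hypothetical stripe geometry for one XLV subvolume.
stripe_unit_kb = 64   # size of one stripe unit
ndisks = 4            # number of disks in the stripe

# The stripe width equals the stripe unit size times the number of disks.
stripe_width_kb = stripe_unit_kb * ndisks

# A transfer counts toward irix.xlv.aligned.full_width when it begins and
# ends on stripe unit boundaries and involves exactly one stripe width.
start_kb, length_kb = 128, 256   # hypothetical transfer
is_full_width = (
    start_kb % stripe_unit_kb == 0
    and (start_kb + length_kb) % stripe_unit_kb == 0
    and length_kb == stripe_width_kb
)

print(stripe_width_kb, is_full_width)  # 256 True
```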
@ irix.xlv.aligned.lt_width Aligned operations for less than stripe width transfers
The cumulative count for each XLV subvolume of the subset of the total
stripe operations, as reported by irix.xlv.stripe_ops, where
- the transfer begins on a stripe unit boundary
- the transfer ends on a stripe unit boundary
- the transfer involves _less_ than one stripe width
The stripe width equals the stripe unit size times the number of disks in
the stripe.
These transfers are efficient in terms of alignment, but may produce
suboptimal balance and/or concurrency across the disks comprising the
stripe.
@ irix.xlv.aligned.gt_width Aligned operations for more than stripe width transfers
The cumulative count for each XLV subvolume of the subset of the total
stripe operations, as reported by irix.xlv.stripe_ops, where
- the transfer begins on a stripe unit boundary
- the transfer ends on a stripe unit boundary
- the transfer involves _more_ than one stripe width
The stripe width equals the stripe unit size times the number of disks in
the stripe.
These transfers are efficient in terms of alignment, but may produce
suboptimal balance and/or concurrency across the disks comprising the
stripe.
@ irix.xlv.aligned.part_unit Aligned operations for partial stripe unit transfers
The cumulative count for each XLV subvolume of the subset of the total
stripe operations, as reported by irix.xlv.stripe_ops, where
- the transfer begins on a stripe unit boundary
- the transfer does _not_ end on a stripe unit boundary
In this case the transfer may involve zero or more stripe units plus
a partial stripe unit.
These transfers are generally suboptimal in terms of transfer and
buffer alignment, balance and/or concurrency across the disks
comprising the stripe.
@ irix.xlv.unaligned.full_width Unaligned operations for stripe width transfers
The cumulative count for each XLV subvolume of the subset of the total
stripe operations, as reported by irix.xlv.stripe_ops, where
- the transfer does _not_ begin on a stripe unit boundary
- the transfer does _not_ end on a stripe unit boundary
- the transfer involves _exactly_ one stripe width
The stripe width equals the stripe unit size times the number of disks in
the stripe.
These transfers are suboptimal with respect to transfer and buffer
alignment, but efficient in terms of the potential for concurrency in
the disk subsystem.
@ irix.xlv.unaligned.lt_width Unaligned operations for less than stripe width transfers
The cumulative count for each XLV subvolume of the subset of the total
stripe operations, as reported by irix.xlv.stripe_ops, where
- the transfer does _not_ begin on a stripe unit boundary
- the transfer ends on a stripe unit boundary
- the transfer involves _less_ than one stripe width
The stripe width equals the stripe unit size times the number of disks in
the stripe.
These transfers may be suboptimal with respect to transfer and buffer
alignment, and may produce suboptimal balance and/or concurrency across
the disks comprising the stripe.
@ irix.xlv.unaligned.gt_width Unaligned operations for more than stripe width transfers
The cumulative count for each XLV subvolume of the subset of the total
stripe operations, as reported by irix.xlv.stripe_ops, where
- the transfer does _not_ begin on a stripe unit boundary
- the transfer ends on a stripe unit boundary
- the transfer involves _more_ than one stripe width
The stripe width equals the stripe unit size times the number of disks in
the stripe.
These transfers may be suboptimal with respect to transfer and buffer
alignment, and may produce suboptimal balance and/or concurrency across
the disks comprising the stripe.
@ irix.xlv.unaligned.part_unit Unaligned operations for partial stripe unit transfers
The cumulative count for each XLV subvolume of the subset of the total
stripe operations, as reported by irix.xlv.stripe_ops, where
- the transfer does _not_ begin on a stripe unit boundary
- the transfer does _not_ end on a stripe unit boundary
- the transfer involves _more_ or _less_ than one stripe width
In this case the transfer may involve zero or more stripe units plus
a partial stripe unit.
These transfers are the least efficient by all criteria.
@ irix.xlv.largest_io.stripes Number of stripe units in largest XLV transfer
The number of stripe units involved in the largest read or write so far
for each XLV subvolume.
@ irix.xlv.largest_io.count Number of times largest I/O has occurred
The number of times the largest read or write operation so far has
occurred for each XLV subvolume.